Dec 16 02:04:34.166953 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Dec 16 02:04:34.166970 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025 Dec 16 02:04:34.166976 kernel: KASLR enabled Dec 16 02:04:34.166980 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 16 02:04:34.166985 kernel: printk: legacy bootconsole [pl11] enabled Dec 16 02:04:34.166989 kernel: efi: EFI v2.7 by EDK II Dec 16 02:04:34.166995 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598 Dec 16 02:04:34.166999 kernel: random: crng init done Dec 16 02:04:34.167003 kernel: secureboot: Secure boot disabled Dec 16 02:04:34.167007 kernel: ACPI: Early table checksum verification disabled Dec 16 02:04:34.167011 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Dec 16 02:04:34.167015 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167019 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167024 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 02:04:34.167030 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167034 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167039 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167044 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167049 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167053 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167057 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 16 02:04:34.167062 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 02:04:34.167066 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 16 02:04:34.167071 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 02:04:34.167075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 16 02:04:34.167079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Dec 16 02:04:34.167084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Dec 16 02:04:34.167089 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 16 02:04:34.167093 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 16 02:04:34.167098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 16 02:04:34.167102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 16 02:04:34.167106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 16 02:04:34.167111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 16 02:04:34.167115 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 16 02:04:34.167120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 16 02:04:34.167124 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 16 02:04:34.167128 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Dec 16 02:04:34.167133 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Dec 16 02:04:34.167138 kernel: Zone ranges: Dec 16 02:04:34.167142 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 16 02:04:34.167149 kernel: DMA32 empty Dec 16 02:04:34.167153 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 02:04:34.167158 kernel: Device empty Dec 16 02:04:34.167163 kernel: Movable zone start for each node Dec 16 02:04:34.167168 kernel: Early memory node ranges Dec 16 02:04:34.167173 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 16 02:04:34.167177 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Dec 16 02:04:34.167182 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Dec 16 02:04:34.167187 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Dec 16 02:04:34.168814 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Dec 16 02:04:34.168832 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Dec 16 02:04:34.168838 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 02:04:34.168847 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 16 02:04:34.168852 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 16 02:04:34.168857 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Dec 16 02:04:34.168862 kernel: psci: probing for conduit method from ACPI. Dec 16 02:04:34.168866 kernel: psci: PSCIv1.3 detected in firmware. Dec 16 02:04:34.168871 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 02:04:34.168876 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 16 02:04:34.168880 kernel: psci: SMC Calling Convention v1.4 Dec 16 02:04:34.168885 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 16 02:04:34.168890 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 16 02:04:34.168894 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 02:04:34.168899 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 02:04:34.168905 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 02:04:34.168910 kernel: Detected PIPT I-cache on CPU0 Dec 16 02:04:34.168915 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Dec 16 02:04:34.168920 kernel: CPU features: detected: GIC system register CPU interface Dec 16 02:04:34.168924 kernel: CPU features: detected: Spectre-v4 Dec 16 02:04:34.168929 kernel: CPU features: detected: Spectre-BHB Dec 16 02:04:34.168934 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 02:04:34.168939 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 02:04:34.168943 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Dec 16 02:04:34.168948 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 02:04:34.168953 kernel: alternatives: applying boot alternatives Dec 16 02:04:34.168959 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:04:34.168964 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 02:04:34.168969 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 02:04:34.168974 kernel: Fallback order for Node 0: 0 Dec 16 02:04:34.168978 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Dec 16 02:04:34.168983 kernel: Policy zone: Normal Dec 16 02:04:34.168988 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 02:04:34.168992 kernel: software IO TLB: area num 2. Dec 16 02:04:34.168997 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB) Dec 16 02:04:34.169002 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 02:04:34.169007 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 02:04:34.169013 kernel: rcu: RCU event tracing is enabled. Dec 16 02:04:34.169017 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 02:04:34.169022 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 02:04:34.169027 kernel: Tracing variant of Tasks RCU enabled. Dec 16 02:04:34.169032 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 02:04:34.169036 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 02:04:34.169041 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 02:04:34.169046 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 02:04:34.169051 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 02:04:34.169055 kernel: GICv3: 960 SPIs implemented Dec 16 02:04:34.169061 kernel: GICv3: 0 Extended SPIs implemented Dec 16 02:04:34.169065 kernel: Root IRQ handler: gic_handle_irq Dec 16 02:04:34.169070 kernel: GICv3: GICv3 features: 16 PPIs, RSS Dec 16 02:04:34.169074 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Dec 16 02:04:34.169079 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 16 02:04:34.169084 kernel: ITS: No ITS available, not enabling LPIs Dec 16 02:04:34.169089 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 02:04:34.169093 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Dec 16 02:04:34.169098 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 02:04:34.169103 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Dec 16 02:04:34.169108 kernel: Console: colour dummy device 80x25 Dec 16 02:04:34.169114 kernel: printk: legacy console [tty1] enabled Dec 16 02:04:34.169119 kernel: ACPI: Core revision 20240827 Dec 16 02:04:34.169124 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Dec 16 02:04:34.169129 kernel: pid_max: default: 32768 minimum: 301 Dec 16 02:04:34.169134 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 02:04:34.169139 kernel: landlock: Up and running. Dec 16 02:04:34.169144 kernel: SELinux: Initializing. Dec 16 02:04:34.169150 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 02:04:34.169155 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 02:04:34.169160 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Dec 16 02:04:34.169165 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Dec 16 02:04:34.169173 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 02:04:34.169179 kernel: rcu: Hierarchical SRCU implementation. Dec 16 02:04:34.169184 kernel: rcu: Max phase no-delay instances is 400. Dec 16 02:04:34.169189 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 02:04:34.169195 kernel: Remapping and enabling EFI services. Dec 16 02:04:34.169200 kernel: smp: Bringing up secondary CPUs ... Dec 16 02:04:34.169206 kernel: Detected PIPT I-cache on CPU1 Dec 16 02:04:34.169211 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 16 02:04:34.169216 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Dec 16 02:04:34.169222 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 02:04:34.169227 kernel: SMP: Total of 2 processors activated. 
Dec 16 02:04:34.169233 kernel: CPU: All CPU(s) started at EL1 Dec 16 02:04:34.169238 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 02:04:34.169243 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 16 02:04:34.169248 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 02:04:34.169254 kernel: CPU features: detected: Common not Private translations Dec 16 02:04:34.169260 kernel: CPU features: detected: CRC32 instructions Dec 16 02:04:34.169265 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Dec 16 02:04:34.169270 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 02:04:34.169275 kernel: CPU features: detected: LSE atomic instructions Dec 16 02:04:34.169281 kernel: CPU features: detected: Privileged Access Never Dec 16 02:04:34.169286 kernel: CPU features: detected: Speculation barrier (SB) Dec 16 02:04:34.169291 kernel: CPU features: detected: TLB range maintenance instructions Dec 16 02:04:34.169297 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 02:04:34.169302 kernel: CPU features: detected: Scalable Vector Extension Dec 16 02:04:34.169307 kernel: alternatives: applying system-wide alternatives Dec 16 02:04:34.169313 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Dec 16 02:04:34.169318 kernel: SVE: maximum available vector length 16 bytes per vector Dec 16 02:04:34.169323 kernel: SVE: default vector length 16 bytes per vector Dec 16 02:04:34.169328 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved) Dec 16 02:04:34.169335 kernel: devtmpfs: initialized Dec 16 02:04:34.169340 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 02:04:34.169345 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 02:04:34.169350 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 02:04:34.169355 kernel: 0 pages in range for non-PLT usage Dec 16 02:04:34.169361 kernel: 515168 pages in range for PLT usage Dec 16 02:04:34.169366 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 02:04:34.169372 kernel: SMBIOS 3.1.0 present. Dec 16 02:04:34.169377 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Dec 16 02:04:34.169382 kernel: DMI: Memory slots populated: 2/2 Dec 16 02:04:34.169387 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 02:04:34.169393 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 02:04:34.169398 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 02:04:34.169403 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 02:04:34.169408 kernel: audit: initializing netlink subsys (disabled) Dec 16 02:04:34.169414 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Dec 16 02:04:34.169419 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 02:04:34.169425 kernel: cpuidle: using governor menu Dec 16 02:04:34.169430 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 02:04:34.169435 kernel: ASID allocator initialised with 32768 entries Dec 16 02:04:34.169440 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 02:04:34.169446 kernel: Serial: AMBA PL011 UART driver Dec 16 02:04:34.169451 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 02:04:34.169457 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 02:04:34.169462 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 02:04:34.169467 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 02:04:34.169472 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 02:04:34.169478 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 02:04:34.169483 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 02:04:34.169489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 02:04:34.169494 kernel: ACPI: Added _OSI(Module Device) Dec 16 02:04:34.169499 kernel: ACPI: Added _OSI(Processor Device) Dec 16 02:04:34.169504 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 02:04:34.169509 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 02:04:34.169515 kernel: ACPI: Interpreter enabled Dec 16 02:04:34.169520 kernel: ACPI: Using GIC for interrupt routing Dec 16 02:04:34.169526 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 16 02:04:34.169531 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 02:04:34.169536 kernel: printk: legacy bootconsole [pl11] disabled Dec 16 02:04:34.169541 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 16 02:04:34.169547 kernel: ACPI: CPU0 has been hot-added Dec 16 02:04:34.169552 kernel: ACPI: CPU1 has been hot-added Dec 16 02:04:34.169557 kernel: iommu: Default domain type: Translated Dec 16 02:04:34.169563 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 02:04:34.169568 kernel: efivars: Registered efivars operations Dec 16 02:04:34.169573 kernel: vgaarb: loaded Dec 16 02:04:34.169579 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 02:04:34.169584 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 02:04:34.169589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 02:04:34.169594 kernel: pnp: PnP ACPI init Dec 16 02:04:34.169600 kernel: pnp: PnP ACPI: found 0 devices Dec 16 02:04:34.169605 kernel: NET: Registered PF_INET protocol family Dec 16 02:04:34.169610 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 02:04:34.169616 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 02:04:34.169621 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 02:04:34.169626 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 02:04:34.169632 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 02:04:34.169638 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 02:04:34.169643 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 02:04:34.169648 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 02:04:34.169653 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 02:04:34.169658 kernel: PCI: CLS 0 bytes, default 64 Dec 16 02:04:34.169664 kernel: kvm [1]: HYP mode not available Dec 
16 02:04:34.169669 kernel: Initialise system trusted keyrings Dec 16 02:04:34.169674 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 02:04:34.169680 kernel: Key type asymmetric registered Dec 16 02:04:34.169685 kernel: Asymmetric key parser 'x509' registered Dec 16 02:04:34.169690 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 02:04:34.169696 kernel: io scheduler mq-deadline registered Dec 16 02:04:34.169701 kernel: io scheduler kyber registered Dec 16 02:04:34.169706 kernel: io scheduler bfq registered Dec 16 02:04:34.169711 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 02:04:34.169717 kernel: thunder_xcv, ver 1.0 Dec 16 02:04:34.169722 kernel: thunder_bgx, ver 1.0 Dec 16 02:04:34.169727 kernel: nicpf, ver 1.0 Dec 16 02:04:34.169732 kernel: nicvf, ver 1.0 Dec 16 02:04:34.169892 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 02:04:34.169962 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T02:04:30 UTC (1765850670) Dec 16 02:04:34.169970 kernel: efifb: probing for efifb Dec 16 02:04:34.169976 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 02:04:34.169981 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 02:04:34.169987 kernel: efifb: scrolling: redraw Dec 16 02:04:34.169992 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 02:04:34.169997 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 02:04:34.170002 kernel: fb0: EFI VGA frame buffer device Dec 16 02:04:34.170008 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 16 02:04:34.170014 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 02:04:34.170019 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 02:04:34.170024 kernel: watchdog: NMI not fully supported Dec 16 02:04:34.170029 kernel: watchdog: Hard watchdog permanently disabled Dec 16 02:04:34.170034 kernel: NET: Registered PF_INET6 protocol family Dec 16 02:04:34.170040 kernel: Segment Routing with IPv6 Dec 16 02:04:34.170045 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 02:04:34.170051 kernel: NET: Registered PF_PACKET protocol family Dec 16 02:04:34.170056 kernel: Key type dns_resolver registered Dec 16 02:04:34.170061 kernel: registered taskstats version 1 Dec 16 02:04:34.170066 kernel: Loading compiled-in X.509 certificates Dec 16 02:04:34.170072 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d' Dec 16 02:04:34.170077 kernel: Demotion targets for Node 0: null Dec 16 02:04:34.170083 kernel: Key type .fscrypt registered Dec 16 02:04:34.170088 kernel: Key type fscrypt-provisioning registered Dec 16 02:04:34.170093 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 02:04:34.170098 kernel: ima: Allocated hash algorithm: sha1 Dec 16 02:04:34.170103 kernel: ima: No architecture policies found Dec 16 02:04:34.170109 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 02:04:34.170114 kernel: clk: Disabling unused clocks Dec 16 02:04:34.170119 kernel: PM: genpd: Disabling unused power domains Dec 16 02:04:34.170125 kernel: Freeing unused kernel memory: 12480K Dec 16 02:04:34.170130 kernel: Run /init as init process Dec 16 02:04:34.170135 kernel: with arguments: Dec 16 02:04:34.170140 kernel: /init Dec 16 02:04:34.170145 kernel: with environment: Dec 16 02:04:34.170150 kernel: HOME=/ Dec 16 02:04:34.170155 kernel: TERM=linux Dec 16 02:04:34.170161 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 02:04:34.170166 kernel: SCSI subsystem initialized Dec 16 02:04:34.170171 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 02:04:34.170177 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 02:04:34.170259 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 02:04:34.170267 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 02:04:34.170273 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 02:04:34.170279 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 02:04:34.170284 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 02:04:34.170289 kernel: PTP clock support registered Dec 16 02:04:34.170294 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 02:04:34.170300 kernel: hv_vmbus: registering driver hv_utils Dec 16 02:04:34.170305 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 02:04:34.170311 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 02:04:34.170316 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 02:04:34.170321 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 02:04:34.170416 kernel: scsi host0: storvsc_host_t Dec 16 02:04:34.170492 kernel: scsi host1: storvsc_host_t Dec 16 02:04:34.170581 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 16 02:04:34.170662 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 02:04:34.170735 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 16 02:04:34.171768 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Dec 16 02:04:34.171879 kernel: sd 1:0:0:0: [sda] Write Protect is off Dec 16 02:04:34.171958 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 16 02:04:34.172033 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 16 02:04:34.172120 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#82 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 02:04:34.172191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#89 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 02:04:34.172198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 02:04:34.172271 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Dec 16 02:04:34.172347 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Dec 16 02:04:34.172355 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 02:04:34.172427 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Dec 16 02:04:34.172434 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 02:04:34.172439 kernel: device-mapper: uevent: version 1.0.3 Dec 16 02:04:34.172444 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 02:04:34.172450 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 02:04:34.172455 kernel: raid6: neonx8 gen() 18544 MB/s Dec 16 02:04:34.172461 kernel: raid6: neonx4 gen() 18577 MB/s Dec 16 02:04:34.172466 kernel: raid6: neonx2 gen() 17093 MB/s Dec 16 02:04:34.172472 kernel: raid6: neonx1 gen() 15133 MB/s Dec 16 02:04:34.172477 kernel: raid6: int64x8 gen() 10570 MB/s Dec 16 02:04:34.172482 kernel: raid6: int64x4 gen() 10618 MB/s Dec 16 02:04:34.172487 kernel: raid6: int64x2 gen() 8986 MB/s Dec 16 02:04:34.172492 kernel: raid6: int64x1 gen() 7048 MB/s Dec 16 02:04:34.172497 kernel: raid6: using algorithm neonx4 gen() 18577 MB/s Dec 16 02:04:34.172504 kernel: raid6: .... xor() 15142 MB/s, rmw enabled Dec 16 02:04:34.172509 kernel: raid6: using neon recovery algorithm Dec 16 02:04:34.172514 kernel: xor: measuring software checksum speed Dec 16 02:04:34.172519 kernel: 8regs : 28589 MB/sec Dec 16 02:04:34.172524 kernel: 32regs : 28729 MB/sec Dec 16 02:04:34.172530 kernel: arm64_neon : 37405 MB/sec Dec 16 02:04:34.172535 kernel: xor: using function: arm64_neon (37405 MB/sec) Dec 16 02:04:34.172541 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 02:04:34.172546 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (398) Dec 16 02:04:34.172552 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 02:04:34.172557 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:04:34.172562 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 02:04:34.172568 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 02:04:34.172573 kernel: loop: module loaded Dec 16 02:04:34.172579 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 02:04:34.172584 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 02:04:34.172590 systemd[1]: Successfully made /usr/ read-only. Dec 16 02:04:34.172597 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:04:34.172603 systemd[1]: Detected virtualization microsoft. Dec 16 02:04:34.172609 systemd[1]: Detected architecture arm64. Dec 16 02:04:34.172615 systemd[1]: Running in initrd. Dec 16 02:04:34.172621 systemd[1]: No hostname configured, using default hostname. Dec 16 02:04:34.172627 systemd[1]: Hostname set to . Dec 16 02:04:34.172633 systemd[1]: Initializing machine ID from random generator. Dec 16 02:04:34.172638 systemd[1]: Queued start job for default target initrd.target. Dec 16 02:04:34.172644 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:04:34.172650 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:04:34.172656 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 16 02:04:34.172662 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 02:04:34.172668 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:04:34.172674 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 02:04:34.172680 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 02:04:34.172686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:04:34.172692 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:04:34.172698 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:04:34.172703 systemd[1]: Reached target paths.target - Path Units. Dec 16 02:04:34.172709 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:04:34.172714 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:04:34.172720 systemd[1]: Reached target timers.target - Timer Units. Dec 16 02:04:34.172726 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:04:34.172732 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:04:34.172738 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:04:34.172743 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 02:04:34.172749 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 02:04:34.172755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:04:34.172765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:04:34.172771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:04:34.172777 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:04:34.172792 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 02:04:34.172799 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 02:04:34.172806 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:04:34.172812 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 02:04:34.172819 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 02:04:34.172824 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 02:04:34.172830 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:04:34.172836 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:04:34.172856 systemd-journald[535]: Collecting audit messages is enabled. Dec 16 02:04:34.172871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:04:34.172877 systemd-journald[535]: Journal started Dec 16 02:04:34.172891 systemd-journald[535]: Runtime Journal (/run/log/journal/1583bd57de414fd0abc10e4f89c72ad1) is 8M, max 78.3M, 70.3M free. Dec 16 02:04:34.197386 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:04:34.198065 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Dec 16 02:04:34.218582 kernel: audit: type=1130 audit(1765850674.196:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.216644 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:04:34.241029 kernel: audit: type=1130 audit(1765850674.213:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.260231 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 02:04:34.260255 kernel: audit: type=1130 audit(1765850674.240:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.253836 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 02:04:34.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.279225 kernel: audit: type=1130 audit(1765850674.266:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.269930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 02:04:34.294079 kernel: Bridge firewalling registered Dec 16 02:04:34.287937 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 02:04:34.293606 systemd-modules-load[538]: Inserted module 'br_netfilter' Dec 16 02:04:34.312121 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 02:04:34.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.333644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:04:34.341045 kernel: audit: type=1130 audit(1765850674.315:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.346886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:04:34.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:04:34.362797 systemd-tmpfiles[547]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 02:04:34.380139 kernel: audit: type=1130 audit(1765850674.350:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.375168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 02:04:34.387929 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:04:34.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.416232 kernel: audit: type=1130 audit(1765850674.398:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.411329 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:04:34.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.434587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:04:34.455412 kernel: audit: type=1130 audit(1765850674.421:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.455429 kernel: audit: type=1130 audit(1765850674.438:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.456000 audit: BPF prog-id=6 op=LOAD Dec 16 02:04:34.458924 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 02:04:34.471277 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:04:34.484738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:04:34.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.495647 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:04:34.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.505939 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 02:04:34.593276 systemd-resolved[564]: Positive Trust Anchors: Dec 16 02:04:34.593289 systemd-resolved[564]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:04:34.593292 systemd-resolved[564]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:04:34.593311 systemd-resolved[564]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:04:34.618070 systemd-resolved[564]: Defaulting to hostname 'linux'. Dec 16 02:04:34.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.618679 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 02:04:34.646908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:04:34.667060 dracut-cmdline[576]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:04:34.805812 kernel: Loading iSCSI transport class v2.0-870. Dec 16 02:04:34.865815 kernel: iscsi: registered transport (tcp) Dec 16 02:04:34.899733 kernel: iscsi: registered transport (qla4xxx) Dec 16 02:04:34.899754 kernel: QLogic iSCSI HBA Driver Dec 16 02:04:34.968455 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 02:04:34.992455 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:04:35.007032 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 16 02:04:35.007054 kernel: audit: type=1130 audit(1765850674.997:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:34.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.021237 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:04:35.064776 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 02:04:35.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.088821 kernel: audit: type=1130 audit(1765850675.068:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:04:35.087940 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 02:04:35.105915 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 02:04:35.127032 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:04:35.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.152000 audit: BPF prog-id=7 op=LOAD Dec 16 02:04:35.157213 kernel: audit: type=1130 audit(1765850675.137:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.157243 kernel: audit: type=1334 audit(1765850675.152:18): prog-id=7 op=LOAD Dec 16 02:04:35.157251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:04:35.172329 kernel: audit: type=1334 audit(1765850675.155:19): prog-id=8 op=LOAD Dec 16 02:04:35.155000 audit: BPF prog-id=8 op=LOAD Dec 16 02:04:35.251978 systemd-udevd[791]: Using default interface naming scheme 'v257'. Dec 16 02:04:35.257701 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:04:35.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.275273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:04:35.301427 kernel: audit: type=1130 audit(1765850675.262:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.301447 kernel: audit: type=1130 audit(1765850675.289:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.302257 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 02:04:35.316000 audit: BPF prog-id=9 op=LOAD Dec 16 02:04:35.320536 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 02:04:35.328708 kernel: audit: type=1334 audit(1765850675.316:22): prog-id=9 op=LOAD Dec 16 02:04:35.339905 dracut-pre-trigger[914]: rd.md=0: removing MD RAID activation Dec 16 02:04:35.359926 systemd-networkd[915]: lo: Link UP Dec 16 02:04:35.362688 systemd-networkd[915]: lo: Gained carrier Dec 16 02:04:35.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.363210 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 02:04:35.405365 kernel: audit: type=1130 audit(1765850675.366:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:04:35.405386 kernel: audit: type=1130 audit(1765850675.389:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.367750 systemd[1]: Reached target network.target - Network. Dec 16 02:04:35.383817 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 02:04:35.391986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:04:35.450647 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:04:35.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.462607 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 02:04:35.536641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:04:35.558596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#54 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 02:04:35.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.536776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:04:35.545571 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:04:35.551894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:04:35.576412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:04:35.578629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:04:35.595069 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 02:04:35.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.597541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:04:35.626059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:04:35.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:35.674622 kernel: hv_netvsc 002248bb-5770-0022-48bb-5770002248bb eth0: VF slot 1 added Dec 16 02:04:35.677918 systemd-networkd[915]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:04:35.677925 systemd-networkd[915]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 02:04:35.681563 systemd-networkd[915]: eth0: Link UP Dec 16 02:04:35.681893 systemd-networkd[915]: eth0: Gained carrier Dec 16 02:04:35.681903 systemd-networkd[915]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:04:35.717240 kernel: hv_vmbus: registering driver hv_pci Dec 16 02:04:35.717262 kernel: hv_pci a1ab5040-b76e-4ba4-9235-bcfb0c51ba2f: PCI VMBus probing: Using version 0x10004 Dec 16 02:04:35.743169 kernel: hv_pci a1ab5040-b76e-4ba4-9235-bcfb0c51ba2f: PCI host bridge to bus b76e:00 Dec 16 02:04:35.743386 kernel: pci_bus b76e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 02:04:35.743493 kernel: pci_bus b76e:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 02:04:35.764256 kernel: pci b76e:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 02:04:35.769800 kernel: pci b76e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 02:04:35.773833 kernel: pci b76e:00:02.0: enabling Extended Tags Dec 16 02:04:35.787834 kernel: pci b76e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b76e:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 02:04:35.797330 kernel: pci_bus b76e:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 02:04:35.797501 kernel: pci b76e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 02:04:35.800869 systemd-networkd[915]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 02:04:35.998832 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 16 02:04:36.010954 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 02:04:36.026842 kernel: mlx5_core b76e:00:02.0: enabling device (0000 -> 0002) Dec 16 02:04:36.035163 kernel: mlx5_core b76e:00:02.0: PTM is not supported by PCIe Dec 16 02:04:36.035329 kernel: mlx5_core b76e:00:02.0: firmware version: 16.30.5006 Dec 16 02:04:36.128212 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 02:04:36.168558 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 16 02:04:36.198140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 02:04:36.252126 kernel: hv_netvsc 002248bb-5770-0022-48bb-5770002248bb eth0: VF registering: eth1 Dec 16 02:04:36.252321 kernel: mlx5_core b76e:00:02.0 eth1: joined to eth0 Dec 16 02:04:36.264815 kernel: mlx5_core b76e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 02:04:36.277666 systemd-networkd[915]: eth1: Interface name change detected, renamed to enP46958s1. Dec 16 02:04:36.284242 kernel: mlx5_core b76e:00:02.0 enP46958s1: renamed from eth1 Dec 16 02:04:36.346859 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 02:04:36.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:36.351899 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:04:36.360423 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:04:36.370039 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Dec 16 02:04:36.382965 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 02:04:36.409823 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:04:36.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:36.426801 kernel: mlx5_core b76e:00:02.0 enP46958s1: Link up Dec 16 02:04:36.462801 kernel: hv_netvsc 002248bb-5770-0022-48bb-5770002248bb eth0: Data path switched to VF: enP46958s1 Dec 16 02:04:36.462759 systemd-networkd[915]: enP46958s1: Link UP Dec 16 02:04:36.686029 systemd-networkd[915]: enP46958s1: Gained carrier Dec 16 02:04:37.230831 disk-uuid[1027]: Warning: The kernel is still using the old partition table. Dec 16 02:04:37.230831 disk-uuid[1027]: The new table will be used at the next reboot or after you Dec 16 02:04:37.230831 disk-uuid[1027]: run partprobe(8) or kpartx(8) Dec 16 02:04:37.230831 disk-uuid[1027]: The operation has completed successfully. Dec 16 02:04:37.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:37.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:37.242108 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 02:04:37.242233 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 02:04:37.249944 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 02:04:37.305810 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1188) Dec 16 02:04:37.316457 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:04:37.316482 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:04:37.352954 kernel: BTRFS info (device sda6): turning on async discard Dec 16 02:04:37.352991 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 02:04:37.362803 kernel: BTRFS info (device sda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:04:37.363146 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 02:04:37.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:37.368414 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 02:04:37.517989 systemd-networkd[915]: eth0: Gained IPv6LL Dec 16 02:04:38.477135 ignition[1207]: Ignition 2.24.0 Dec 16 02:04:38.477150 ignition[1207]: Stage: fetch-offline Dec 16 02:04:38.479847 ignition[1207]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:04:38.483819 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:04:38.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:04:38.479862 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 02:04:38.492847 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 02:04:38.479982 ignition[1207]: parsed url from cmdline: "" Dec 16 02:04:38.479985 ignition[1207]: no config URL provided Dec 16 02:04:38.479988 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 02:04:38.479998 ignition[1207]: no config at "/usr/lib/ignition/user.ign" Dec 16 02:04:38.480001 ignition[1207]: failed to fetch config: resource requires networking Dec 16 02:04:38.480134 ignition[1207]: Ignition finished successfully Dec 16 02:04:38.521848 ignition[1214]: Ignition 2.24.0 Dec 16 02:04:38.521854 ignition[1214]: Stage: fetch Dec 16 02:04:38.522077 ignition[1214]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:04:38.522084 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 02:04:38.522159 ignition[1214]: parsed url from cmdline: "" Dec 16 02:04:38.522162 ignition[1214]: no config URL provided Dec 16 02:04:38.522172 ignition[1214]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 02:04:38.522177 ignition[1214]: no config at "/usr/lib/ignition/user.ign" Dec 16 02:04:38.522193 ignition[1214]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 02:04:38.592187 ignition[1214]: GET result: OK Dec 16 02:04:38.592266 ignition[1214]: config has been read from IMDS userdata Dec 16 02:04:38.592279 ignition[1214]: parsing config with SHA512: b8a8d3268401a7249870854d9face1586e6cc009c1bbdf393d8c53f915c71c7a2823e88d9204c20c85b8b5c6dab284f19316f8e229c0320019023e46435a1e30 Dec 16 02:04:38.597137 unknown[1214]: fetched base config from "system" Dec 16 02:04:38.597451 ignition[1214]: fetch: fetch complete Dec 16 02:04:38.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:38.597143 unknown[1214]: fetched base config from "system" Dec 16 02:04:38.597455 ignition[1214]: fetch: fetch passed Dec 16 02:04:38.597146 unknown[1214]: fetched user config from "azure" Dec 16 02:04:38.597495 ignition[1214]: Ignition finished successfully Dec 16 02:04:38.601402 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 02:04:38.608431 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 02:04:38.640099 ignition[1221]: Ignition 2.24.0 Dec 16 02:04:38.640111 ignition[1221]: Stage: kargs Dec 16 02:04:38.640296 ignition[1221]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:04:38.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:38.646333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 02:04:38.640303 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 02:04:38.652054 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 02:04:38.640959 ignition[1221]: kargs: kargs passed Dec 16 02:04:38.641002 ignition[1221]: Ignition finished successfully Dec 16 02:04:38.675006 ignition[1227]: Ignition 2.24.0 Dec 16 02:04:38.678938 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 16 02:04:38.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:38.675011 ignition[1227]: Stage: disks Dec 16 02:04:38.686320 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 02:04:38.675230 ignition[1227]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:04:38.696169 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 02:04:38.675237 ignition[1227]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 02:04:38.704319 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:04:38.675890 ignition[1227]: disks: disks passed Dec 16 02:04:38.713480 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 02:04:38.675933 ignition[1227]: Ignition finished successfully Dec 16 02:04:38.721430 systemd[1]: Reached target basic.target - Basic System. Dec 16 02:04:38.731957 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 02:04:38.851521 systemd-fsck[1235]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 02:04:38.861905 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 02:04:38.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:38.873493 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 02:04:39.180803 kernel: EXT4-fs (sda9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 02:04:39.181555 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 02:04:39.186018 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 02:04:39.224915 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 02:04:39.232749 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 02:04:39.241327 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 02:04:39.251388 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 02:04:39.255851 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 02:04:39.266748 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 02:04:39.276476 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 02:04:39.304730 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1249) Dec 16 02:04:39.304762 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:04:39.304797 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:04:39.319665 kernel: BTRFS info (device sda6): turning on async discard Dec 16 02:04:39.319703 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 02:04:39.320692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
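[Editor's note] The Flatcar Metadata Hostname Agent started above is what produces the coreos-metadata fetches logged in the next lines: it queries the Azure wireserver and IMDS, then writes the instance name to the new root's /etc/hostname. The sketch below is a rough illustration of that sequence, not the agent itself; the Metadata header and output path are assumptions based on the log.

```python
# Rough sketch of the hostname-agent behaviour logged below: fetch the compute
# name from IMDS and write it under /sysroot. Illustrative only; the real work
# is done by the coreos-metadata binary shown in the log.
import urllib.request

NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

def fetch_instance_name() -> str:
    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    hostname = fetch_instance_name()
    with open("/sysroot/etc/hostname", "w", encoding="utf-8") as f:
        f.write(hostname + "\n")
```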
Dec 16 02:04:39.961247 coreos-metadata[1251]: Dec 16 02:04:39.960 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 02:04:39.969091 coreos-metadata[1251]: Dec 16 02:04:39.969 INFO Fetch successful Dec 16 02:04:39.973482 coreos-metadata[1251]: Dec 16 02:04:39.969 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 02:04:39.982047 coreos-metadata[1251]: Dec 16 02:04:39.978 INFO Fetch successful Dec 16 02:04:39.998747 coreos-metadata[1251]: Dec 16 02:04:39.998 INFO wrote hostname ci-4547.0.0-a-de7f477aa9 to /sysroot/etc/hostname Dec 16 02:04:40.006078 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 02:04:40.019861 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 16 02:04:40.019880 kernel: audit: type=1130 audit(1765850680.010:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:40.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:41.322648 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 02:04:41.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:41.347734 kernel: audit: type=1130 audit(1765850681.326:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:41.343906 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 02:04:41.356279 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 02:04:41.377879 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 02:04:41.387795 kernel: BTRFS info (device sda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:04:41.401182 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 02:04:41.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:41.413881 ignition[1354]: INFO : Ignition 2.24.0 Dec 16 02:04:41.425849 kernel: audit: type=1130 audit(1765850681.408:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:41.425870 ignition[1354]: INFO : Stage: mount Dec 16 02:04:41.425870 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:04:41.425870 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 02:04:41.425870 ignition[1354]: INFO : mount: mount passed Dec 16 02:04:41.425870 ignition[1354]: INFO : Ignition finished successfully Dec 16 02:04:41.462472 kernel: audit: type=1130 audit(1765850681.428:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:04:41.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:41.416130 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 02:04:41.429686 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 02:04:41.476131 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 02:04:41.505803 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1364) Dec 16 02:04:41.515614 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:04:41.515635 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:04:41.525357 kernel: BTRFS info (device sda6): turning on async discard Dec 16 02:04:41.525382 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 02:04:41.526734 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 02:04:41.554640 ignition[1381]: INFO : Ignition 2.24.0 Dec 16 02:04:41.554640 ignition[1381]: INFO : Stage: files Dec 16 02:04:41.560775 ignition[1381]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:04:41.560775 ignition[1381]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 02:04:41.560775 ignition[1381]: DEBUG : files: compiled without relabeling support, skipping Dec 16 02:04:41.574592 ignition[1381]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 02:04:41.574592 ignition[1381]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 02:04:41.651552 ignition[1381]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 02:04:41.657259 ignition[1381]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 02:04:41.657259 ignition[1381]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 02:04:41.654248 unknown[1381]: wrote ssh authorized keys file for user: core Dec 16 02:04:41.680596 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 02:04:41.688515 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 02:04:41.721959 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 02:04:41.817629 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 02:04:41.825632 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 02:04:41.825632 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 02:04:41.825632 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 02:04:41.825632 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 02:04:41.825632 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 02:04:41.825632 
ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 02:04:41.825632 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 02:04:41.825632 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 02:04:41.884095 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 02:04:41.884095 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 02:04:41.884095 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 02:04:41.884095 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 02:04:41.884095 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 02:04:41.884095 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 16 02:04:42.481880 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 02:04:44.530062 ignition[1381]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 02:04:44.530062 ignition[1381]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 02:04:44.768799 ignition[1381]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 02:04:44.781972 ignition[1381]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 02:04:44.781972 ignition[1381]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 02:04:44.781972 ignition[1381]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 02:04:44.806956 ignition[1381]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 02:04:44.806956 ignition[1381]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 02:04:44.806956 ignition[1381]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 02:04:44.806956 ignition[1381]: INFO : files: files passed Dec 16 02:04:44.806956 ignition[1381]: INFO : Ignition finished successfully Dec 16 02:04:44.860026 kernel: audit: type=1130 audit(1765850684.810:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:44.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 02:04:44.801870 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 02:04:44.812248 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 02:04:44.896038 kernel: audit: type=1130 audit(1765850684.869:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:44.896062 kernel: audit: type=1131 audit(1765850684.869:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:44.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:44.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:44.848519 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 02:04:44.859102 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 02:04:44.859190 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 02:04:44.911978 initrd-setup-root-after-ignition[1413]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:04:44.911978 initrd-setup-root-after-ignition[1413]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:04:44.925058 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:04:44.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:44.919506 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 02:04:44.948656 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 02:04:44.962366 kernel: audit: type=1130 audit(1765850684.929:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:44.962948 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 02:04:45.010144 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 02:04:45.010222 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 02:04:45.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.018896 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
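[Editor's note] The files stage that just finished (downloading helm-v3.17.3-linux-arm64.tar.gz, writing the /etc/extensions/kubernetes.raw link, enabling prepare-helm.service) is driven by the Ignition config fetched earlier from userData. The dict below is a hypothetical, roughly shaped config that would produce those operations; the version number and field names are recalled from the Ignition v3 spec and the unit contents are omitted, so treat it as a sketch rather than the config actually served to this node.

```python
# Hypothetical Ignition-style config (as a Python dict) matching the files-stage
# operations logged above. URLs and paths come from the log; schema details are
# assumptions.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
            },
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True},
        ],
    },
}

if __name__ == "__main__":
    print(json.dumps(config, indent=2))
```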
Dec 16 02:04:45.054469 kernel: audit: type=1130 audit(1765850685.017:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.054490 kernel: audit: type=1131 audit(1765850685.017:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.051867 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 02:04:45.059251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 02:04:45.060014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 02:04:45.098088 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 02:04:45.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.120297 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 02:04:45.131273 kernel: audit: type=1130 audit(1765850685.102:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.143736 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:04:45.143836 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:04:45.154153 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:04:45.164048 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 02:04:45.172435 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 02:04:45.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.172529 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 02:04:45.198114 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 02:04:45.211600 kernel: audit: type=1131 audit(1765850685.180:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.207347 systemd[1]: Stopped target basic.target - Basic System. Dec 16 02:04:45.215460 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 02:04:45.223897 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 02:04:45.233386 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 02:04:45.243139 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:04:45.252433 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 02:04:45.262128 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:04:45.271325 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 02:04:45.281326 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 02:04:45.290591 systemd[1]: Stopped target swap.target - Swaps. 
Dec 16 02:04:45.298249 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 02:04:45.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.298356 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:04:45.330100 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:04:45.340043 kernel: audit: type=1131 audit(1765850685.306:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.334895 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:04:45.345596 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 02:04:45.350604 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:04:45.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.357273 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 02:04:45.357369 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 02:04:45.412754 kernel: audit: type=1131 audit(1765850685.365:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.412776 kernel: audit: type=1131 audit(1765850685.395:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.386235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 02:04:45.434949 kernel: audit: type=1131 audit(1765850685.416:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.386335 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 02:04:45.458312 kernel: audit: type=1131 audit(1765850685.439:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.396116 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 02:04:45.396186 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Dec 16 02:04:45.417266 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 02:04:45.417345 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 02:04:45.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.508100 ignition[1437]: INFO : Ignition 2.24.0 Dec 16 02:04:45.508100 ignition[1437]: INFO : Stage: umount Dec 16 02:04:45.508100 ignition[1437]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:04:45.508100 ignition[1437]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 02:04:45.508100 ignition[1437]: INFO : umount: umount passed Dec 16 02:04:45.508100 ignition[1437]: INFO : Ignition finished successfully Dec 16 02:04:45.559034 kernel: audit: type=1131 audit(1765850685.488:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.458376 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 02:04:45.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.471241 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 02:04:45.481863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 02:04:45.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.482028 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:04:45.489174 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 02:04:45.489291 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:04:45.515154 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 02:04:45.515236 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 02:04:45.527698 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 16 02:04:45.527779 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 02:04:45.537277 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 02:04:45.537356 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 02:04:45.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.547016 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 02:04:45.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.547054 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 02:04:45.555392 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 02:04:45.555424 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 02:04:45.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.563198 systemd[1]: Stopped target network.target - Network. Dec 16 02:04:45.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.571726 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 02:04:45.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.571802 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:04:45.580838 systemd[1]: Stopped target paths.target - Path Units. Dec 16 02:04:45.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.589966 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 02:04:45.734000 audit: BPF prog-id=9 op=UNLOAD Dec 16 02:04:45.734000 audit: BPF prog-id=6 op=UNLOAD Dec 16 02:04:45.595194 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:04:45.607187 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 02:04:45.615926 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 02:04:45.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.624465 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 02:04:45.624530 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:04:45.632888 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Dec 16 02:04:45.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.632920 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:04:45.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.640890 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 02:04:45.640907 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:04:45.649415 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 02:04:45.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.649461 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 02:04:45.657107 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 02:04:45.657137 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 02:04:45.665456 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 02:04:45.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.673813 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 02:04:45.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.684128 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 02:04:45.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.684683 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 02:04:45.684766 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 02:04:45.693721 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 02:04:45.693928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 02:04:45.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.702673 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 02:04:45.924382 kernel: hv_netvsc 002248bb-5770-0022-48bb-5770002248bb eth0: Data path switched from VF: enP46958s1 Dec 16 02:04:45.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.702744 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Dec 16 02:04:45.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.715820 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 02:04:45.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.715911 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 02:04:45.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.733194 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 02:04:45.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.740101 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 02:04:45.740142 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:04:45.751696 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 02:04:45.751755 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 02:04:45.760445 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 02:04:45.773138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 02:04:45.773197 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:04:45.782207 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 02:04:45.782242 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:04:45.790649 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 02:04:45.790689 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 02:04:45.795961 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:04:45.812536 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 02:04:45.812629 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:04:45.822875 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 02:04:46.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:45.822936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 02:04:45.837185 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 02:04:45.837214 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:04:45.845911 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 02:04:45.845953 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Dec 16 02:04:45.859551 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 02:04:45.859600 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 02:04:45.868565 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 02:04:45.868606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:04:45.878488 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 02:04:45.893801 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 02:04:45.893864 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:04:45.903479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 02:04:45.903539 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:04:45.920423 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 02:04:45.920473 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:04:45.929661 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 02:04:45.929697 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:04:45.939945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:04:45.939986 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:04:45.950238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 02:04:45.951813 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 02:04:46.019858 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 02:04:46.152427 systemd-journald[535]: Received SIGTERM from PID 1 (systemd). Dec 16 02:04:46.019986 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 02:04:46.029496 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 02:04:46.037848 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 02:04:46.059772 systemd[1]: Switching root. Dec 16 02:04:46.167104 systemd-journald[535]: Journal stopped Dec 16 02:04:51.205948 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 02:04:51.205967 kernel: SELinux: policy capability open_perms=1 Dec 16 02:04:51.205975 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 02:04:51.205981 kernel: SELinux: policy capability always_check_network=0 Dec 16 02:04:51.205987 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 02:04:51.205993 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 02:04:51.205999 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 02:04:51.206005 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 02:04:51.206011 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 02:04:51.206018 systemd[1]: Successfully loaded SELinux policy in 192.535ms. Dec 16 02:04:51.206026 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.533ms. 
Dec 16 02:04:51.206033 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:04:51.206040 systemd[1]: Detected virtualization microsoft. Dec 16 02:04:51.206046 systemd[1]: Detected architecture arm64. Dec 16 02:04:51.206053 systemd[1]: Detected first boot. Dec 16 02:04:51.206060 systemd[1]: Hostname set to . Dec 16 02:04:51.206066 systemd[1]: Initializing machine ID from random generator. Dec 16 02:04:51.206073 zram_generator::config[1480]: No configuration found. Dec 16 02:04:51.206080 kernel: NET: Registered PF_VSOCK protocol family Dec 16 02:04:51.206087 systemd[1]: Populated /etc with preset unit settings. Dec 16 02:04:51.206093 kernel: kauditd_printk_skb: 38 callbacks suppressed Dec 16 02:04:51.206099 kernel: audit: type=1334 audit(1765850690.308:96): prog-id=12 op=LOAD Dec 16 02:04:51.206105 kernel: audit: type=1334 audit(1765850690.308:97): prog-id=3 op=UNLOAD Dec 16 02:04:51.206111 kernel: audit: type=1334 audit(1765850690.311:98): prog-id=13 op=LOAD Dec 16 02:04:51.206117 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 02:04:51.206124 kernel: audit: type=1334 audit(1765850690.312:99): prog-id=14 op=LOAD Dec 16 02:04:51.206130 kernel: audit: type=1334 audit(1765850690.312:100): prog-id=4 op=UNLOAD Dec 16 02:04:51.206136 kernel: audit: type=1334 audit(1765850690.312:101): prog-id=5 op=UNLOAD Dec 16 02:04:51.206143 kernel: audit: type=1131 audit(1765850690.317:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.206149 kernel: audit: type=1334 audit(1765850690.358:103): prog-id=12 op=UNLOAD Dec 16 02:04:51.206155 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 02:04:51.206163 kernel: audit: type=1130 audit(1765850690.372:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.206169 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 02:04:51.206176 kernel: audit: type=1131 audit(1765850690.372:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.206183 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 02:04:51.206189 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 02:04:51.206196 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 02:04:51.206203 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 02:04:51.206211 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 02:04:51.206218 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 02:04:51.206226 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Dec 16 02:04:51.206233 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 02:04:51.206239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:04:51.206246 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:04:51.206253 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 02:04:51.206260 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 02:04:51.206266 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 02:04:51.206273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:04:51.206279 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 02:04:51.206286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:04:51.206294 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:04:51.206300 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 02:04:51.206307 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 02:04:51.206314 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 02:04:51.206320 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 02:04:51.206327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:04:51.206334 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:04:51.206341 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 02:04:51.206347 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:04:51.206354 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:04:51.206361 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 02:04:51.206367 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 02:04:51.206375 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 02:04:51.206382 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:04:51.206388 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 02:04:51.206395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:04:51.206402 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 02:04:51.206409 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 02:04:51.206416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:04:51.206422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:04:51.206429 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 02:04:51.206435 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 02:04:51.206442 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 02:04:51.206449 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 02:04:51.206456 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Dec 16 02:04:51.206462 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 02:04:51.206469 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 02:04:51.206475 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 02:04:51.206482 systemd[1]: Reached target machines.target - Containers. Dec 16 02:04:51.206489 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 02:04:51.206497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:04:51.206503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:04:51.206510 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 02:04:51.206516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:04:51.206523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 02:04:51.206529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:04:51.206537 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 02:04:51.206543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:04:51.206550 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 02:04:51.206557 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 02:04:51.206563 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 02:04:51.206570 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 02:04:51.206576 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 02:04:51.206584 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:04:51.206591 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:04:51.206597 kernel: fuse: init (API version 7.41) Dec 16 02:04:51.206604 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:04:51.206610 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 02:04:51.206617 kernel: ACPI: bus type drm_connector registered Dec 16 02:04:51.206623 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 02:04:51.206641 systemd-journald[1569]: Collecting audit messages is enabled. Dec 16 02:04:51.206656 systemd-journald[1569]: Journal started Dec 16 02:04:51.206671 systemd-journald[1569]: Runtime Journal (/run/log/journal/fe742495bf5846e690c61c4b4d6b4d8e) is 8M, max 78.3M, 70.3M free. Dec 16 02:04:50.697000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 02:04:51.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:04:51.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.088000 audit: BPF prog-id=14 op=UNLOAD Dec 16 02:04:51.088000 audit: BPF prog-id=13 op=UNLOAD Dec 16 02:04:51.089000 audit: BPF prog-id=15 op=LOAD Dec 16 02:04:51.089000 audit: BPF prog-id=16 op=LOAD Dec 16 02:04:51.089000 audit: BPF prog-id=17 op=LOAD Dec 16 02:04:51.203000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 02:04:51.203000 audit[1569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff8994c70 a2=4000 a3=0 items=0 ppid=1 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:04:51.203000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 02:04:50.294613 systemd[1]: Queued start job for default target multi-user.target. Dec 16 02:04:50.313648 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 02:04:50.317830 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 02:04:50.318115 systemd[1]: systemd-journald.service: Consumed 2.600s CPU time. Dec 16 02:04:51.225583 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 02:04:51.238447 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:04:51.248900 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:04:51.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.249724 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 02:04:51.254019 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 02:04:51.258978 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 02:04:51.263404 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 02:04:51.268289 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 02:04:51.273320 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 02:04:51.278232 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:04:51.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.283519 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 02:04:51.283652 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 02:04:51.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 02:04:51.289070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:04:51.289208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:04:51.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.294386 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 02:04:51.294510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 02:04:51.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.299274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:04:51.299408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:04:51.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.304649 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 02:04:51.304772 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 02:04:51.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.310258 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:04:51.310378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:04:51.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.315307 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 16 02:04:51.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.320468 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:04:51.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.326771 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 02:04:51.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.332976 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 02:04:51.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.338702 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:04:51.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.352997 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:04:51.358252 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 02:04:51.364377 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 02:04:51.377874 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 02:04:51.382624 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 02:04:51.382707 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:04:51.388102 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 02:04:51.409576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:04:51.409759 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:04:51.410766 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 02:04:51.424456 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 02:04:51.429241 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 02:04:51.430111 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 02:04:51.435018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:04:51.435931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 16 02:04:51.441917 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 02:04:51.447931 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 02:04:51.456020 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 02:04:51.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.460859 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 02:04:51.466570 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 02:04:51.482813 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 02:04:51.484987 systemd-journald[1569]: Time spent on flushing to /var/log/journal/fe742495bf5846e690c61c4b4d6b4d8e is 10.796ms for 1090 entries. Dec 16 02:04:51.484987 systemd-journald[1569]: System Journal (/var/log/journal/fe742495bf5846e690c61c4b4d6b4d8e) is 8M, max 2.2G, 2.2G free. Dec 16 02:04:51.531245 systemd-journald[1569]: Received client request to flush runtime journal. Dec 16 02:04:51.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.492045 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 02:04:51.498258 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 02:04:51.512880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:04:51.532227 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 02:04:51.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.540500 systemd-tmpfiles[1622]: ACLs are not supported, ignoring. Dec 16 02:04:51.540513 systemd-tmpfiles[1622]: ACLs are not supported, ignoring. Dec 16 02:04:51.543095 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:04:51.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.550628 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 02:04:51.561862 kernel: loop1: detected capacity change from 0 to 211168 Dec 16 02:04:51.590491 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 02:04:51.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:04:51.615816 kernel: loop2: detected capacity change from 0 to 45344 Dec 16 02:04:51.716045 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 02:04:51.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.722000 audit: BPF prog-id=18 op=LOAD Dec 16 02:04:51.722000 audit: BPF prog-id=19 op=LOAD Dec 16 02:04:51.722000 audit: BPF prog-id=20 op=LOAD Dec 16 02:04:51.724686 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 02:04:51.729000 audit: BPF prog-id=21 op=LOAD Dec 16 02:04:51.730889 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 02:04:51.735858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:04:51.749000 audit: BPF prog-id=22 op=LOAD Dec 16 02:04:51.749000 audit: BPF prog-id=23 op=LOAD Dec 16 02:04:51.749000 audit: BPF prog-id=24 op=LOAD Dec 16 02:04:51.752922 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 02:04:51.757145 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Dec 16 02:04:51.757406 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Dec 16 02:04:51.757000 audit: BPF prog-id=25 op=LOAD Dec 16 02:04:51.757000 audit: BPF prog-id=26 op=LOAD Dec 16 02:04:51.757000 audit: BPF prog-id=27 op=LOAD Dec 16 02:04:51.759262 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 02:04:51.766061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:04:51.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.788475 systemd-nsresourced[1642]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 02:04:51.789524 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 02:04:51.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.794687 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 02:04:51.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.908939 systemd-oomd[1639]: No swap; memory pressure usage will be degraded Dec 16 02:04:51.909613 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 02:04:51.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:51.949968 systemd-resolved[1640]: Positive Trust Anchors: Dec 16 02:04:51.949979 systemd-resolved[1640]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:04:51.949982 systemd-resolved[1640]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:04:51.950001 systemd-resolved[1640]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:04:52.022806 kernel: loop3: detected capacity change from 0 to 100192 Dec 16 02:04:52.080428 systemd-resolved[1640]: Using system hostname 'ci-4547.0.0-a-de7f477aa9'. Dec 16 02:04:52.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:52.081755 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 02:04:52.086874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:04:52.123261 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 02:04:52.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:52.128000 audit: BPF prog-id=8 op=UNLOAD Dec 16 02:04:52.128000 audit: BPF prog-id=7 op=UNLOAD Dec 16 02:04:52.128000 audit: BPF prog-id=28 op=LOAD Dec 16 02:04:52.128000 audit: BPF prog-id=29 op=LOAD Dec 16 02:04:52.130332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:04:52.155457 systemd-udevd[1662]: Using default interface naming scheme 'v257'. Dec 16 02:04:52.326781 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 02:04:52.377319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:04:52.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:52.382000 audit: BPF prog-id=30 op=LOAD Dec 16 02:04:52.388336 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 02:04:52.448181 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 02:04:52.491827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 02:04:52.513396 systemd-networkd[1667]: lo: Link UP Dec 16 02:04:52.513404 systemd-networkd[1667]: lo: Gained carrier Dec 16 02:04:52.514419 systemd-networkd[1667]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:04:52.514422 systemd-networkd[1667]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 02:04:52.514682 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 16 02:04:52.520808 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 02:04:52.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:52.522627 systemd[1]: Reached target network.target - Network. Dec 16 02:04:52.530528 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 02:04:52.539815 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 02:04:52.554797 kernel: loop4: detected capacity change from 0 to 27544 Dec 16 02:04:52.554855 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 02:04:52.601805 kernel: hv_vmbus: registering driver hv_balloon Dec 16 02:04:52.601898 kernel: mlx5_core b76e:00:02.0 enP46958s1: Link up Dec 16 02:04:52.617869 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 02:04:52.617929 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 02:04:52.617948 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 02:04:52.625692 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 16 02:04:52.625768 kernel: hv_netvsc 002248bb-5770-0022-48bb-5770002248bb eth0: Data path switched to VF: enP46958s1 Dec 16 02:04:52.635126 kernel: Console: switching to colour dummy device 80x25 Dec 16 02:04:52.635206 systemd-networkd[1667]: enP46958s1: Link UP Dec 16 02:04:52.635523 systemd-networkd[1667]: eth0: Link UP Dec 16 02:04:52.635533 systemd-networkd[1667]: eth0: Gained carrier Dec 16 02:04:52.635548 systemd-networkd[1667]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:04:52.636815 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 02:04:52.641212 systemd-networkd[1667]: enP46958s1: Gained carrier Dec 16 02:04:52.649843 systemd-networkd[1667]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 02:04:52.660016 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 02:04:52.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:52.680905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:04:52.697002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:04:52.697282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:04:52.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:52.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:52.706914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 02:04:52.719814 kernel: MACsec IEEE 802.1AE Dec 16 02:04:52.777908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 02:04:52.784816 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 02:04:52.905184 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 02:04:52.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.006825 kernel: loop5: detected capacity change from 0 to 211168 Dec 16 02:04:53.022807 kernel: loop6: detected capacity change from 0 to 45344 Dec 16 02:04:53.036804 kernel: loop7: detected capacity change from 0 to 100192 Dec 16 02:04:53.049817 kernel: loop1: detected capacity change from 0 to 27544 Dec 16 02:04:53.059930 (sd-merge)[1794]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 02:04:53.063536 (sd-merge)[1794]: Merged extensions into '/usr'. Dec 16 02:04:53.066710 systemd[1]: Reload requested from client PID 1620 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 02:04:53.066826 systemd[1]: Reloading... Dec 16 02:04:53.119812 zram_generator::config[1826]: No configuration found. Dec 16 02:04:53.307014 systemd[1]: Reloading finished in 239 ms. Dec 16 02:04:53.333056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 02:04:53.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.338861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:04:53.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.351690 systemd[1]: Starting ensure-sysext.service... Dec 16 02:04:53.358911 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 16 02:04:53.363000 audit: BPF prog-id=31 op=LOAD Dec 16 02:04:53.363000 audit: BPF prog-id=30 op=UNLOAD Dec 16 02:04:53.364000 audit: BPF prog-id=32 op=LOAD Dec 16 02:04:53.364000 audit: BPF prog-id=33 op=LOAD Dec 16 02:04:53.364000 audit: BPF prog-id=28 op=UNLOAD Dec 16 02:04:53.364000 audit: BPF prog-id=29 op=UNLOAD Dec 16 02:04:53.365000 audit: BPF prog-id=34 op=LOAD Dec 16 02:04:53.365000 audit: BPF prog-id=15 op=UNLOAD Dec 16 02:04:53.365000 audit: BPF prog-id=35 op=LOAD Dec 16 02:04:53.365000 audit: BPF prog-id=36 op=LOAD Dec 16 02:04:53.365000 audit: BPF prog-id=16 op=UNLOAD Dec 16 02:04:53.365000 audit: BPF prog-id=17 op=UNLOAD Dec 16 02:04:53.365000 audit: BPF prog-id=37 op=LOAD Dec 16 02:04:53.365000 audit: BPF prog-id=18 op=UNLOAD Dec 16 02:04:53.366000 audit: BPF prog-id=38 op=LOAD Dec 16 02:04:53.366000 audit: BPF prog-id=39 op=LOAD Dec 16 02:04:53.366000 audit: BPF prog-id=19 op=UNLOAD Dec 16 02:04:53.366000 audit: BPF prog-id=20 op=UNLOAD Dec 16 02:04:53.366000 audit: BPF prog-id=40 op=LOAD Dec 16 02:04:53.366000 audit: BPF prog-id=22 op=UNLOAD Dec 16 02:04:53.366000 audit: BPF prog-id=41 op=LOAD Dec 16 02:04:53.366000 audit: BPF prog-id=42 op=LOAD Dec 16 02:04:53.366000 audit: BPF prog-id=23 op=UNLOAD Dec 16 02:04:53.366000 audit: BPF prog-id=24 op=UNLOAD Dec 16 02:04:53.367000 audit: BPF prog-id=43 op=LOAD Dec 16 02:04:53.367000 audit: BPF prog-id=21 op=UNLOAD Dec 16 02:04:53.368000 audit: BPF prog-id=44 op=LOAD Dec 16 02:04:53.368000 audit: BPF prog-id=25 op=UNLOAD Dec 16 02:04:53.368000 audit: BPF prog-id=45 op=LOAD Dec 16 02:04:53.368000 audit: BPF prog-id=46 op=LOAD Dec 16 02:04:53.368000 audit: BPF prog-id=26 op=UNLOAD Dec 16 02:04:53.368000 audit: BPF prog-id=27 op=UNLOAD Dec 16 02:04:53.372340 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 02:04:53.372361 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 02:04:53.372818 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 02:04:53.373453 systemd-tmpfiles[1886]: ACLs are not supported, ignoring. Dec 16 02:04:53.373489 systemd-tmpfiles[1886]: ACLs are not supported, ignoring. Dec 16 02:04:53.373687 systemd[1]: Reload requested from client PID 1885 ('systemctl') (unit ensure-sysext.service)... Dec 16 02:04:53.373698 systemd[1]: Reloading... Dec 16 02:04:53.392140 systemd-tmpfiles[1886]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 02:04:53.392149 systemd-tmpfiles[1886]: Skipping /boot Dec 16 02:04:53.397738 systemd-tmpfiles[1886]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 02:04:53.397751 systemd-tmpfiles[1886]: Skipping /boot Dec 16 02:04:53.436802 zram_generator::config[1923]: No configuration found. Dec 16 02:04:53.588901 systemd[1]: Reloading finished in 214 ms. 
Dec 16 02:04:53.606000 audit: BPF prog-id=47 op=LOAD Dec 16 02:04:53.607000 audit: BPF prog-id=34 op=UNLOAD Dec 16 02:04:53.607000 audit: BPF prog-id=48 op=LOAD Dec 16 02:04:53.607000 audit: BPF prog-id=49 op=LOAD Dec 16 02:04:53.607000 audit: BPF prog-id=35 op=UNLOAD Dec 16 02:04:53.607000 audit: BPF prog-id=36 op=UNLOAD Dec 16 02:04:53.607000 audit: BPF prog-id=50 op=LOAD Dec 16 02:04:53.607000 audit: BPF prog-id=44 op=UNLOAD Dec 16 02:04:53.607000 audit: BPF prog-id=51 op=LOAD Dec 16 02:04:53.607000 audit: BPF prog-id=52 op=LOAD Dec 16 02:04:53.607000 audit: BPF prog-id=45 op=UNLOAD Dec 16 02:04:53.607000 audit: BPF prog-id=46 op=UNLOAD Dec 16 02:04:53.608000 audit: BPF prog-id=53 op=LOAD Dec 16 02:04:53.608000 audit: BPF prog-id=43 op=UNLOAD Dec 16 02:04:53.608000 audit: BPF prog-id=54 op=LOAD Dec 16 02:04:53.608000 audit: BPF prog-id=55 op=LOAD Dec 16 02:04:53.608000 audit: BPF prog-id=32 op=UNLOAD Dec 16 02:04:53.608000 audit: BPF prog-id=33 op=UNLOAD Dec 16 02:04:53.609000 audit: BPF prog-id=56 op=LOAD Dec 16 02:04:53.609000 audit: BPF prog-id=40 op=UNLOAD Dec 16 02:04:53.609000 audit: BPF prog-id=57 op=LOAD Dec 16 02:04:53.609000 audit: BPF prog-id=58 op=LOAD Dec 16 02:04:53.609000 audit: BPF prog-id=41 op=UNLOAD Dec 16 02:04:53.609000 audit: BPF prog-id=42 op=UNLOAD Dec 16 02:04:53.609000 audit: BPF prog-id=59 op=LOAD Dec 16 02:04:53.609000 audit: BPF prog-id=37 op=UNLOAD Dec 16 02:04:53.609000 audit: BPF prog-id=60 op=LOAD Dec 16 02:04:53.609000 audit: BPF prog-id=61 op=LOAD Dec 16 02:04:53.609000 audit: BPF prog-id=38 op=UNLOAD Dec 16 02:04:53.609000 audit: BPF prog-id=39 op=UNLOAD Dec 16 02:04:53.610000 audit: BPF prog-id=62 op=LOAD Dec 16 02:04:53.610000 audit: BPF prog-id=31 op=UNLOAD Dec 16 02:04:53.622859 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:04:53.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.637303 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 02:04:53.647519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 02:04:53.653341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:04:53.654803 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:04:53.661471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:04:53.669979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:04:53.674887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:04:53.675172 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:04:53.676649 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 02:04:53.681424 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:04:53.688542 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Dec 16 02:04:53.697036 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 02:04:53.704194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:04:53.705824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:04:53.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.712718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:04:53.712890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:04:53.712000 audit[1990]: SYSTEM_BOOT pid=1990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.722069 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:04:53.722585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:04:53.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.733817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:04:53.734942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:04:53.742104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:04:53.751014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:04:53.755674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:04:53.755981 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:04:53.756211 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:04:53.757540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 16 02:04:53.757805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:04:53.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.763672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:04:53.763921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:04:53.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.769902 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:04:53.770203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:04:53.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.777610 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 02:04:53.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.787712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:04:53.788727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:04:53.800001 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 02:04:53.806452 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:04:53.819616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:04:53.824484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:04:53.824617 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:04:53.824686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:04:53.824820 systemd[1]: Reached target time-set.target - System Time Set. 
Dec 16 02:04:53.830523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 02:04:53.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.836117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:04:53.836264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:04:53.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.841986 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 02:04:53.842122 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 02:04:53.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.846971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:04:53.847107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:04:53.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.852537 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:04:53.852664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:04:53.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.861431 systemd[1]: Finished ensure-sysext.service. Dec 16 02:04:53.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:04:53.866970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 16 02:04:53.867027 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:04:54.037000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 02:04:54.037000 audit[2024]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc110910 a2=420 a3=0 items=0 ppid=1977 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:04:54.037000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:04:54.038638 augenrules[2024]: No rules Dec 16 02:04:54.039953 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 02:04:54.040226 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 02:04:54.157973 systemd-networkd[1667]: eth0: Gained IPv6LL Dec 16 02:04:54.160270 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 02:04:54.165557 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 02:04:54.541870 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 02:04:54.547813 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 02:04:59.607806 ldconfig[1988]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 02:04:59.637733 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 02:04:59.644937 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 02:04:59.660065 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 02:04:59.664948 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 02:04:59.669318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 02:04:59.674755 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 02:04:59.680334 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 02:04:59.684779 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 02:04:59.690966 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 02:04:59.696496 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 02:04:59.701453 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 02:04:59.706781 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 02:04:59.706819 systemd[1]: Reached target paths.target - Path Units. Dec 16 02:04:59.710680 systemd[1]: Reached target timers.target - Timer Units. Dec 16 02:04:59.715821 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 02:04:59.721732 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Dec 16 02:04:59.727267 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 02:04:59.732898 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 02:04:59.738021 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 02:04:59.743798 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 02:04:59.748210 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 02:04:59.753547 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 02:04:59.757840 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:04:59.761515 systemd[1]: Reached target basic.target - Basic System. Dec 16 02:04:59.765519 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 02:04:59.765543 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 02:04:59.767657 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 02:04:59.779619 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 02:04:59.786943 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 02:04:59.794566 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 02:04:59.800422 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 02:04:59.807263 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 02:04:59.815304 chronyd[2037]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 02:04:59.816743 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 02:04:59.820662 jq[2045]: false Dec 16 02:04:59.818760 chronyd[2037]: Timezone right/UTC failed leap second check, ignoring Dec 16 02:04:59.820664 chronyd[2037]: Loaded seccomp filter (level 2) Dec 16 02:04:59.821241 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 02:04:59.824242 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 02:04:59.829092 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 02:04:59.829887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:04:59.835750 KVP[2047]: KVP starting; pid is:2047 Dec 16 02:04:59.836911 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 02:04:59.845348 KVP[2047]: KVP LIC Version: 3.1 Dec 16 02:04:59.845543 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 02:04:59.845824 kernel: hv_utils: KVP IC version 4.0 Dec 16 02:04:59.850840 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 02:04:59.856915 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 02:04:59.862877 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 02:04:59.870843 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 02:04:59.876849 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 16 02:04:59.877194 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 02:04:59.877602 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 02:04:59.882622 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 02:04:59.887574 extend-filesystems[2046]: Found /dev/sda6 Dec 16 02:04:59.889900 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 02:04:59.903074 jq[2069]: true Dec 16 02:04:59.903983 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 02:04:59.909810 extend-filesystems[2046]: Found /dev/sda9 Dec 16 02:04:59.923058 extend-filesystems[2046]: Checking size of /dev/sda9 Dec 16 02:04:59.912003 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 02:04:59.912215 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 02:04:59.916710 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 02:04:59.917761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 02:04:59.926180 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 02:04:59.934769 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 02:04:59.934981 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 02:04:59.956836 update_engine[2065]: I20251216 02:04:59.955613 2065 main.cc:92] Flatcar Update Engine starting Dec 16 02:04:59.967494 extend-filesystems[2046]: Resized partition /dev/sda9 Dec 16 02:04:59.973472 jq[2087]: true Dec 16 02:04:59.987985 extend-filesystems[2102]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 02:05:00.010094 systemd-logind[2060]: New seat seat0. Dec 16 02:05:00.021650 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 02:05:00.021684 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Dec 16 02:05:00.012691 systemd-logind[2060]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 02:05:00.012919 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 02:05:00.045807 tar[2086]: linux-arm64/LICENSE Dec 16 02:05:00.063002 tar[2086]: linux-arm64/helm Dec 16 02:05:00.086828 extend-filesystems[2102]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 02:05:00.086828 extend-filesystems[2102]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 02:05:00.086828 extend-filesystems[2102]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Dec 16 02:05:00.173157 bash[2121]: Updated "/home/core/.ssh/authorized_keys" Dec 16 02:05:00.173266 update_engine[2065]: I20251216 02:05:00.144781 2065 update_check_scheduler.cc:74] Next update check in 3m58s Dec 16 02:05:00.088208 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 02:05:00.138429 dbus-daemon[2040]: [system] SELinux support is enabled Dec 16 02:05:00.173511 extend-filesystems[2046]: Resized filesystem in /dev/sda9 Dec 16 02:05:00.088506 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 02:05:00.202382 dbus-daemon[2040]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 02:05:00.122707 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 02:05:00.152494 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 16 02:05:00.207402 systemd[1]: Started update-engine.service - Update Engine. Dec 16 02:05:00.215109 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 02:05:00.215252 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 02:05:00.215336 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 02:05:00.222066 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 02:05:00.222146 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 02:05:00.226931 coreos-metadata[2039]: Dec 16 02:05:00.214 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 02:05:00.234901 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 02:05:00.239904 coreos-metadata[2039]: Dec 16 02:05:00.239 INFO Fetch successful Dec 16 02:05:00.239999 coreos-metadata[2039]: Dec 16 02:05:00.239 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 02:05:00.253126 coreos-metadata[2039]: Dec 16 02:05:00.253 INFO Fetch successful Dec 16 02:05:00.253126 coreos-metadata[2039]: Dec 16 02:05:00.253 INFO Fetching http://168.63.129.16/machine/7761c054-b300-47e9-8b71-63dcf21dbe36/29e55ebe%2Dbd10%2D487f%2D921a%2D2ca80f76af70.%5Fci%2D4547.0.0%2Da%2Dde7f477aa9?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 02:05:00.258679 coreos-metadata[2039]: Dec 16 02:05:00.258 INFO Fetch successful Dec 16 02:05:00.258679 coreos-metadata[2039]: Dec 16 02:05:00.258 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 02:05:00.273581 coreos-metadata[2039]: Dec 16 02:05:00.271 INFO Fetch successful Dec 16 02:05:00.300204 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 02:05:00.312147 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 02:05:00.443496 sshd_keygen[2066]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 02:05:00.461474 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 02:05:00.469963 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 02:05:00.479364 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 02:05:00.505901 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 02:05:00.506137 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 02:05:00.514206 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 02:05:00.530982 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 02:05:00.547424 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 02:05:00.555141 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 02:05:00.561642 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 02:05:00.564689 locksmithd[2187]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 02:05:00.570191 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 02:05:00.583056 tar[2086]: linux-arm64/README.md Dec 16 02:05:00.598941 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 02:05:00.610358 containerd[2088]: time="2025-12-16T02:05:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 02:05:00.611196 containerd[2088]: time="2025-12-16T02:05:00.611161784Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 02:05:00.621175 containerd[2088]: time="2025-12-16T02:05:00.621134592Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.432µs" Dec 16 02:05:00.621639 containerd[2088]: time="2025-12-16T02:05:00.621363320Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 02:05:00.621741 containerd[2088]: time="2025-12-16T02:05:00.621723968Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 02:05:00.621819 containerd[2088]: time="2025-12-16T02:05:00.621776368Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 02:05:00.622023 containerd[2088]: time="2025-12-16T02:05:00.622004176Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 02:05:00.622094 containerd[2088]: time="2025-12-16T02:05:00.622081432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622198 containerd[2088]: time="2025-12-16T02:05:00.622181840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622252 containerd[2088]: time="2025-12-16T02:05:00.622239744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622489 containerd[2088]: time="2025-12-16T02:05:00.622466048Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622541 containerd[2088]: time="2025-12-16T02:05:00.622531120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622590 containerd[2088]: time="2025-12-16T02:05:00.622576736Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622809 containerd[2088]: time="2025-12-16T02:05:00.622613216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622809 containerd[2088]: time="2025-12-16T02:05:00.622765296Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622809 containerd[2088]: time="2025-12-16T02:05:00.622776264Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 02:05:00.622981 containerd[2088]: 
time="2025-12-16T02:05:00.622963184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 02:05:00.623198 containerd[2088]: time="2025-12-16T02:05:00.623178672Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 02:05:00.623273 containerd[2088]: time="2025-12-16T02:05:00.623259856Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 02:05:00.623314 containerd[2088]: time="2025-12-16T02:05:00.623302344Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 02:05:00.623377 containerd[2088]: time="2025-12-16T02:05:00.623367152Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 02:05:00.623601 containerd[2088]: time="2025-12-16T02:05:00.623582848Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 02:05:00.623721 containerd[2088]: time="2025-12-16T02:05:00.623707760Z" level=info msg="metadata content store policy set" policy=shared Dec 16 02:05:00.641642 containerd[2088]: time="2025-12-16T02:05:00.641610896Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 02:05:00.641802 containerd[2088]: time="2025-12-16T02:05:00.641745208Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:05:00.642053 containerd[2088]: time="2025-12-16T02:05:00.642027912Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:05:00.642134 containerd[2088]: time="2025-12-16T02:05:00.642112520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 02:05:00.642186 containerd[2088]: time="2025-12-16T02:05:00.642173872Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 02:05:00.642235 containerd[2088]: time="2025-12-16T02:05:00.642225024Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 02:05:00.642275 containerd[2088]: time="2025-12-16T02:05:00.642264208Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 02:05:00.642320 containerd[2088]: time="2025-12-16T02:05:00.642309688Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 02:05:00.642385 containerd[2088]: time="2025-12-16T02:05:00.642364384Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 02:05:00.642430 containerd[2088]: time="2025-12-16T02:05:00.642419256Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 02:05:00.642474 containerd[2088]: time="2025-12-16T02:05:00.642463136Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 02:05:00.642517 containerd[2088]: time="2025-12-16T02:05:00.642507064Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Dec 16 02:05:00.642555 containerd[2088]: time="2025-12-16T02:05:00.642546552Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 02:05:00.642601 containerd[2088]: time="2025-12-16T02:05:00.642592056Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 02:05:00.642774 containerd[2088]: time="2025-12-16T02:05:00.642760960Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 02:05:00.642920 containerd[2088]: time="2025-12-16T02:05:00.642821008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 02:05:00.642920 containerd[2088]: time="2025-12-16T02:05:00.642837240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 02:05:00.642920 containerd[2088]: time="2025-12-16T02:05:00.642845856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 02:05:00.642920 containerd[2088]: time="2025-12-16T02:05:00.642853320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 02:05:00.642920 containerd[2088]: time="2025-12-16T02:05:00.642859472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 02:05:00.643008 containerd[2088]: time="2025-12-16T02:05:00.642994496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 02:05:00.643051 containerd[2088]: time="2025-12-16T02:05:00.643040120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 02:05:00.643097 containerd[2088]: time="2025-12-16T02:05:00.643087488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 02:05:00.643137 containerd[2088]: time="2025-12-16T02:05:00.643126688Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 02:05:00.643240 containerd[2088]: time="2025-12-16T02:05:00.643172888Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 02:05:00.643240 containerd[2088]: time="2025-12-16T02:05:00.643201936Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 02:05:00.643312 containerd[2088]: time="2025-12-16T02:05:00.643299432Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 02:05:00.643354 containerd[2088]: time="2025-12-16T02:05:00.643344472Z" level=info msg="Start snapshots syncer" Dec 16 02:05:00.643416 containerd[2088]: time="2025-12-16T02:05:00.643405968Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 02:05:00.643768 containerd[2088]: time="2025-12-16T02:05:00.643729864Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 02:05:00.643966 containerd[2088]: time="2025-12-16T02:05:00.643909680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 02:05:00.644040 containerd[2088]: time="2025-12-16T02:05:00.644028408Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 02:05:00.644257 containerd[2088]: time="2025-12-16T02:05:00.644212160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 02:05:00.644257 containerd[2088]: time="2025-12-16T02:05:00.644236160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 02:05:00.644257 containerd[2088]: time="2025-12-16T02:05:00.644244744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 02:05:00.644399 containerd[2088]: time="2025-12-16T02:05:00.644327432Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 02:05:00.644399 containerd[2088]: time="2025-12-16T02:05:00.644347336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 02:05:00.644399 containerd[2088]: time="2025-12-16T02:05:00.644356104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 02:05:00.644399 containerd[2088]: time="2025-12-16T02:05:00.644363408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 02:05:00.644399 containerd[2088]: time="2025-12-16T02:05:00.644370040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
02:05:00.644399 containerd[2088]: time="2025-12-16T02:05:00.644376976Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 02:05:00.644536 containerd[2088]: time="2025-12-16T02:05:00.644524248Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:05:00.644650 containerd[2088]: time="2025-12-16T02:05:00.644631680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:05:00.644749 containerd[2088]: time="2025-12-16T02:05:00.644733736Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:05:00.644819 containerd[2088]: time="2025-12-16T02:05:00.644797616Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:05:00.644861 containerd[2088]: time="2025-12-16T02:05:00.644851080Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 02:05:00.644914 containerd[2088]: time="2025-12-16T02:05:00.644902240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 02:05:00.644952 containerd[2088]: time="2025-12-16T02:05:00.644943320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 02:05:00.644998 containerd[2088]: time="2025-12-16T02:05:00.644991352Z" level=info msg="runtime interface created" Dec 16 02:05:00.645055 containerd[2088]: time="2025-12-16T02:05:00.645020272Z" level=info msg="created NRI interface" Dec 16 02:05:00.645055 containerd[2088]: time="2025-12-16T02:05:00.645028816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 02:05:00.645055 containerd[2088]: time="2025-12-16T02:05:00.645039880Z" level=info msg="Connect containerd service" Dec 16 02:05:00.645146 containerd[2088]: time="2025-12-16T02:05:00.645135032Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 02:05:00.645913 containerd[2088]: time="2025-12-16T02:05:00.645893512Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 02:05:00.877715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:00.890048 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:05:01.172074 containerd[2088]: time="2025-12-16T02:05:01.171947504Z" level=info msg="Start subscribing containerd event" Dec 16 02:05:01.172378 containerd[2088]: time="2025-12-16T02:05:01.172356584Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 02:05:01.172520 containerd[2088]: time="2025-12-16T02:05:01.172436240Z" level=info msg="Start recovering state" Dec 16 02:05:01.172606 containerd[2088]: time="2025-12-16T02:05:01.172592904Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 02:05:01.172716 containerd[2088]: time="2025-12-16T02:05:01.172593568Z" level=info msg="Start event monitor" Dec 16 02:05:01.172778 containerd[2088]: time="2025-12-16T02:05:01.172767344Z" level=info msg="Start cni network conf syncer for default" Dec 16 02:05:01.172897 containerd[2088]: time="2025-12-16T02:05:01.172801288Z" level=info msg="Start streaming server" Dec 16 02:05:01.172897 containerd[2088]: time="2025-12-16T02:05:01.172814840Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 02:05:01.172897 containerd[2088]: time="2025-12-16T02:05:01.172823312Z" level=info msg="runtime interface starting up..." Dec 16 02:05:01.172897 containerd[2088]: time="2025-12-16T02:05:01.172827408Z" level=info msg="starting plugins..." Dec 16 02:05:01.172897 containerd[2088]: time="2025-12-16T02:05:01.172844600Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 02:05:01.177957 containerd[2088]: time="2025-12-16T02:05:01.173104904Z" level=info msg="containerd successfully booted in 0.563040s" Dec 16 02:05:01.173294 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 02:05:01.180749 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 02:05:01.185669 systemd[1]: Startup finished in 3.026s (kernel) + 14.078s (initrd) + 14.210s (userspace) = 31.314s. Dec 16 02:05:01.248004 kubelet[2250]: E1216 02:05:01.247953 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:05:01.250222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:05:01.250411 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:05:01.251028 systemd[1]: kubelet.service: Consumed 556ms CPU time, 259M memory peak. Dec 16 02:05:01.844041 login[2229]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:01.844041 login[2230]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:01.850061 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 02:05:01.850859 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 02:05:01.854976 systemd-logind[2060]: New session 1 of user core. Dec 16 02:05:01.858178 systemd-logind[2060]: New session 2 of user core. Dec 16 02:05:01.882847 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 02:05:01.885065 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 02:05:01.895666 (systemd)[2269]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:01.897665 systemd-logind[2060]: New session 3 of user core. Dec 16 02:05:02.036018 systemd[2269]: Queued start job for default target default.target. Dec 16 02:05:02.047552 systemd[2269]: Created slice app.slice - User Application Slice. Dec 16 02:05:02.047749 systemd[2269]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 02:05:02.047842 systemd[2269]: Reached target paths.target - Paths. Dec 16 02:05:02.047959 systemd[2269]: Reached target timers.target - Timers. 
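[Editor's note] At this point containerd is up but warns that no CNI config exists in /etc/cni/net.d, and kubelet exits because /var/lib/kubelet/config.yaml is missing; both are expected until the node is configured (typically by a provisioner such as kubeadm, assumed here). A minimal preflight sketch, using only the paths taken from the log entries above:

```python
#!/usr/bin/env python3
"""Illustrative preflight check (editor's sketch, not part of the boot log):
report the two files whose absence explains the errors above."""
import glob
import os

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"   # path from the kubelet error above
CNI_CONF_DIR = "/etc/cni/net.d"                   # path from the containerd CNI warning

def main() -> None:
    if os.path.isfile(KUBELET_CONFIG):
        print(f"ok: {KUBELET_CONFIG} present")
    else:
        print(f"pending: {KUBELET_CONFIG} missing -> kubelet keeps exiting "
              "until a provisioner (e.g. kubeadm, assumed) writes it")

    confs = glob.glob(os.path.join(CNI_CONF_DIR, "*.conf*"))
    if confs:
        print(f"ok: CNI config found: {confs}")
    else:
        print(f"pending: no CNI config in {CNI_CONF_DIR} -> pod networking "
              "stays uninitialized, as containerd warns above")

if __name__ == "__main__":
    main()
```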
Dec 16 02:05:02.048972 systemd[2269]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 02:05:02.051430 systemd[2269]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 02:05:02.057979 systemd[2269]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 02:05:02.058107 systemd[2269]: Reached target sockets.target - Sockets. Dec 16 02:05:02.070849 systemd[2269]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 02:05:02.070938 systemd[2269]: Reached target basic.target - Basic System. Dec 16 02:05:02.070980 systemd[2269]: Reached target default.target - Main User Target. Dec 16 02:05:02.071000 systemd[2269]: Startup finished in 169ms. Dec 16 02:05:02.071254 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 02:05:02.081958 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 02:05:02.082521 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 02:05:02.249522 waagent[2225]: 2025-12-16T02:05:02.249381Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 02:05:02.257090 waagent[2225]: 2025-12-16T02:05:02.253957Z INFO Daemon Daemon OS: flatcar 4547.0.0 Dec 16 02:05:02.257365 waagent[2225]: 2025-12-16T02:05:02.257328Z INFO Daemon Daemon Python: 3.11.13 Dec 16 02:05:02.260760 waagent[2225]: 2025-12-16T02:05:02.260649Z INFO Daemon Daemon Run daemon Dec 16 02:05:02.264018 waagent[2225]: 2025-12-16T02:05:02.263984Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Dec 16 02:05:02.270361 waagent[2225]: 2025-12-16T02:05:02.270324Z INFO Daemon Daemon Using waagent for provisioning Dec 16 02:05:02.274217 waagent[2225]: 2025-12-16T02:05:02.274182Z INFO Daemon Daemon Activate resource disk Dec 16 02:05:02.277640 waagent[2225]: 2025-12-16T02:05:02.277608Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 02:05:02.285889 waagent[2225]: 2025-12-16T02:05:02.285858Z INFO Daemon Daemon Found device: None Dec 16 02:05:02.289125 waagent[2225]: 2025-12-16T02:05:02.289093Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 02:05:02.295265 waagent[2225]: 2025-12-16T02:05:02.295231Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 02:05:02.303727 waagent[2225]: 2025-12-16T02:05:02.303688Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 02:05:02.307984 waagent[2225]: 2025-12-16T02:05:02.307947Z INFO Daemon Daemon Running default provisioning handler Dec 16 02:05:02.317335 waagent[2225]: 2025-12-16T02:05:02.317299Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 02:05:02.328055 waagent[2225]: 2025-12-16T02:05:02.328015Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 02:05:02.335128 waagent[2225]: 2025-12-16T02:05:02.335085Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 02:05:02.338771 waagent[2225]: 2025-12-16T02:05:02.338738Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 02:05:02.406515 waagent[2225]: 2025-12-16T02:05:02.406464Z INFO Daemon Daemon Successfully mounted dvd Dec 16 02:05:02.456384 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Dec 16 02:05:02.458812 waagent[2225]: 2025-12-16T02:05:02.458232Z INFO Daemon Daemon Detect protocol endpoint Dec 16 02:05:02.461837 waagent[2225]: 2025-12-16T02:05:02.461806Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 02:05:02.465868 waagent[2225]: 2025-12-16T02:05:02.465836Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 02:05:02.470958 waagent[2225]: 2025-12-16T02:05:02.470931Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 02:05:02.474773 waagent[2225]: 2025-12-16T02:05:02.474745Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 02:05:02.478417 waagent[2225]: 2025-12-16T02:05:02.478391Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 02:05:02.490373 waagent[2225]: 2025-12-16T02:05:02.490337Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 02:05:02.495077 waagent[2225]: 2025-12-16T02:05:02.495055Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 02:05:02.499249 waagent[2225]: 2025-12-16T02:05:02.499225Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 02:05:02.599820 waagent[2225]: 2025-12-16T02:05:02.599662Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 02:05:02.604266 waagent[2225]: 2025-12-16T02:05:02.604228Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 02:05:02.611738 waagent[2225]: 2025-12-16T02:05:02.611700Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 02:05:02.630069 waagent[2225]: 2025-12-16T02:05:02.630039Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 02:05:02.634160 waagent[2225]: 2025-12-16T02:05:02.634127Z INFO Daemon Dec 16 02:05:02.636219 waagent[2225]: 2025-12-16T02:05:02.636191Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: af266f1d-1f91-484b-8a03-ec1aa065d97c eTag: 1704937962769290699 source: Fabric] Dec 16 02:05:02.644463 waagent[2225]: 2025-12-16T02:05:02.644430Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 02:05:02.649097 waagent[2225]: 2025-12-16T02:05:02.649066Z INFO Daemon Dec 16 02:05:02.651121 waagent[2225]: 2025-12-16T02:05:02.651089Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 02:05:02.660062 waagent[2225]: 2025-12-16T02:05:02.660033Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 02:05:02.725119 waagent[2225]: 2025-12-16T02:05:02.725058Z INFO Daemon Downloaded certificate {'thumbprint': '16F54FD7E2FCBE6157FD09B3D1F97F4827670AB7', 'hasPrivateKey': True} Dec 16 02:05:02.731859 waagent[2225]: 2025-12-16T02:05:02.731821Z INFO Daemon Fetch goal state completed Dec 16 02:05:02.742534 waagent[2225]: 2025-12-16T02:05:02.742489Z INFO Daemon Daemon Starting provisioning Dec 16 02:05:02.746106 waagent[2225]: 2025-12-16T02:05:02.746078Z INFO Daemon Daemon Handle ovf-env.xml. 
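[Editor's note] The "Test for route to 168.63.129.16" step above checks reachability of the Azure WireServer, which the agent then talks to over plain HTTP using the negotiated wire protocol version (2012-11-30 in the log). A connectivity-probe sketch under that assumption; this is not waagent's actual code:

```python
#!/usr/bin/env python3
"""Sketch of a WireServer reachability probe (assumption: plain TCP/80)."""
import socket

WIRESERVER = "168.63.129.16"   # fixed platform address seen in the log

def wireserver_reachable(timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the WireServer's HTTP port succeeds."""
    try:
        with socket.create_connection((WIRESERVER, 80), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("route to wireserver:", "exists" if wireserver_reachable() else "missing")
```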
Dec 16 02:05:02.749377 waagent[2225]: 2025-12-16T02:05:02.749354Z INFO Daemon Daemon Set hostname [ci-4547.0.0-a-de7f477aa9] Dec 16 02:05:02.755433 waagent[2225]: 2025-12-16T02:05:02.755395Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-a-de7f477aa9] Dec 16 02:05:02.759856 waagent[2225]: 2025-12-16T02:05:02.759823Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 02:05:02.764254 waagent[2225]: 2025-12-16T02:05:02.764223Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 02:05:02.773937 systemd-networkd[1667]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:05:02.773944 systemd-networkd[1667]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 02:05:02.774020 systemd-networkd[1667]: eth0: DHCP lease lost Dec 16 02:05:02.791341 waagent[2225]: 2025-12-16T02:05:02.787512Z INFO Daemon Daemon Create user account if not exists Dec 16 02:05:02.791436 waagent[2225]: 2025-12-16T02:05:02.791403Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 02:05:02.795449 waagent[2225]: 2025-12-16T02:05:02.795413Z INFO Daemon Daemon Configure sudoer Dec 16 02:05:02.805676 waagent[2225]: 2025-12-16T02:05:02.805636Z INFO Daemon Daemon Configure sshd Dec 16 02:05:02.808856 systemd-networkd[1667]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 02:05:02.814365 waagent[2225]: 2025-12-16T02:05:02.814322Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 02:05:02.823145 waagent[2225]: 2025-12-16T02:05:02.823110Z INFO Daemon Daemon Deploy ssh public key. Dec 16 02:05:03.945055 waagent[2225]: 2025-12-16T02:05:03.945013Z INFO Daemon Daemon Provisioning complete Dec 16 02:05:03.958865 waagent[2225]: 2025-12-16T02:05:03.958829Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 02:05:03.963921 waagent[2225]: 2025-12-16T02:05:03.963891Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 02:05:03.971754 waagent[2225]: 2025-12-16T02:05:03.971729Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 02:05:04.070826 waagent[2322]: 2025-12-16T02:05:04.070229Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 02:05:04.070826 waagent[2322]: 2025-12-16T02:05:04.070346Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Dec 16 02:05:04.070826 waagent[2322]: 2025-12-16T02:05:04.070384Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 02:05:04.070826 waagent[2322]: 2025-12-16T02:05:04.070417Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 02:05:04.108771 waagent[2322]: 2025-12-16T02:05:04.108728Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 02:05:04.109041 waagent[2322]: 2025-12-16T02:05:04.109012Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 02:05:04.109178 waagent[2322]: 2025-12-16T02:05:04.109151Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 02:05:04.114738 waagent[2322]: 2025-12-16T02:05:04.114693Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 02:05:04.119822 waagent[2322]: 2025-12-16T02:05:04.119536Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 02:05:04.119943 waagent[2322]: 2025-12-16T02:05:04.119907Z INFO ExtHandler Dec 16 02:05:04.119994 waagent[2322]: 2025-12-16T02:05:04.119976Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 13e817f0-fa9d-4740-b1cf-e595c998d59f eTag: 1704937962769290699 source: Fabric] Dec 16 02:05:04.120210 waagent[2322]: 2025-12-16T02:05:04.120185Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 02:05:04.120595 waagent[2322]: 2025-12-16T02:05:04.120566Z INFO ExtHandler Dec 16 02:05:04.120634 waagent[2322]: 2025-12-16T02:05:04.120618Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 02:05:04.124046 waagent[2322]: 2025-12-16T02:05:04.124019Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 02:05:04.179587 waagent[2322]: 2025-12-16T02:05:04.179533Z INFO ExtHandler Downloaded certificate {'thumbprint': '16F54FD7E2FCBE6157FD09B3D1F97F4827670AB7', 'hasPrivateKey': True} Dec 16 02:05:04.179960 waagent[2322]: 2025-12-16T02:05:04.179924Z INFO ExtHandler Fetch goal state completed Dec 16 02:05:04.191851 waagent[2322]: 2025-12-16T02:05:04.191810Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Dec 16 02:05:04.195117 waagent[2322]: 2025-12-16T02:05:04.195039Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2322 Dec 16 02:05:04.195180 waagent[2322]: 2025-12-16T02:05:04.195157Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 02:05:04.195420 waagent[2322]: 2025-12-16T02:05:04.195392Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 02:05:04.196508 waagent[2322]: 2025-12-16T02:05:04.196474Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 02:05:04.196865 waagent[2322]: 2025-12-16T02:05:04.196834Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 02:05:04.196983 waagent[2322]: 2025-12-16T02:05:04.196959Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 02:05:04.197400 waagent[2322]: 2025-12-16T02:05:04.197371Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 02:05:04.254937 waagent[2322]: 2025-12-16T02:05:04.254899Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 02:05:04.255087 waagent[2322]: 2025-12-16T02:05:04.255058Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 02:05:04.259453 waagent[2322]: 2025-12-16T02:05:04.259429Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 02:05:04.263946 systemd[1]: Reload requested from client PID 2337 ('systemctl') (unit waagent.service)... Dec 16 02:05:04.264137 systemd[1]: Reloading... Dec 16 02:05:04.339834 zram_generator::config[2397]: No configuration found. Dec 16 02:05:04.477796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#98 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 02:05:04.488109 systemd[1]: Reloading finished in 223 ms. Dec 16 02:05:04.499521 waagent[2322]: 2025-12-16T02:05:04.498774Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 02:05:04.499521 waagent[2322]: 2025-12-16T02:05:04.498916Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 02:05:04.799725 waagent[2322]: 2025-12-16T02:05:04.799609Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
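[Editor's note] The certificate thumbprint logged above (here and earlier by the daemon) is 40 hex characters, i.e. a SHA-1 digest, conventionally taken over the certificate's DER encoding. A sketch that reproduces that format; the PEM path is a hypothetical example, not a file referenced in the log:

```python
#!/usr/bin/env python3
"""Sketch: compute a 40-hex-char certificate thumbprint (SHA-1 over DER)."""
import hashlib
import ssl

def thumbprint(pem_path: str) -> str:
    with open(pem_path) as f:
        pem = f.read()
    der = ssl.PEM_cert_to_DER_cert(pem)           # strip PEM armour -> raw DER bytes
    return hashlib.sha1(der).hexdigest().upper()  # uppercase hex, as in the log

if __name__ == "__main__":
    try:
        print(thumbprint("/tmp/example-cert.pem"))   # hypothetical input path
    except FileNotFoundError:
        print("no example certificate at /tmp/example-cert.pem")
```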
Dec 16 02:05:04.800130 waagent[2322]: 2025-12-16T02:05:04.800094Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 02:05:04.800835 waagent[2322]: 2025-12-16T02:05:04.800795Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 02:05:04.801011 waagent[2322]: 2025-12-16T02:05:04.800974Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 02:05:04.801066 waagent[2322]: 2025-12-16T02:05:04.801049Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 02:05:04.801239 waagent[2322]: 2025-12-16T02:05:04.801211Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 02:05:04.801513 waagent[2322]: 2025-12-16T02:05:04.801478Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 02:05:04.801819 waagent[2322]: 2025-12-16T02:05:04.801779Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 02:05:04.801874 waagent[2322]: 2025-12-16T02:05:04.801852Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 02:05:04.801973 waagent[2322]: 2025-12-16T02:05:04.801949Z INFO EnvHandler ExtHandler Configure routes Dec 16 02:05:04.802011 waagent[2322]: 2025-12-16T02:05:04.801996Z INFO EnvHandler ExtHandler Gateway:None Dec 16 02:05:04.802039 waagent[2322]: 2025-12-16T02:05:04.802025Z INFO EnvHandler ExtHandler Routes:None Dec 16 02:05:04.802484 waagent[2322]: 2025-12-16T02:05:04.802453Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 02:05:04.802569 waagent[2322]: 2025-12-16T02:05:04.802528Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 02:05:04.802688 waagent[2322]: 2025-12-16T02:05:04.802659Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 02:05:04.802688 waagent[2322]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 02:05:04.802688 waagent[2322]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 02:05:04.802688 waagent[2322]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 02:05:04.802688 waagent[2322]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 02:05:04.802688 waagent[2322]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 02:05:04.802688 waagent[2322]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 02:05:04.803248 waagent[2322]: 2025-12-16T02:05:04.803134Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 02:05:04.803248 waagent[2322]: 2025-12-16T02:05:04.803214Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 02:05:04.804005 waagent[2322]: 2025-12-16T02:05:04.803963Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
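[Editor's note] The /proc/net/route dump above prints each IPv4 address as an eight-digit hex number of the network-byte-order value read as a native integer, so on this little-endian aarch64 host the bytes appear reversed. A short decoder, using values copied from the table above:

```python
#!/usr/bin/env python3
"""Sketch: decode the hex address columns of the /proc/net/route dump above."""
import socket
import struct

def hex_to_ip(h: str) -> str:
    """'0114C80A' -> '10.200.20.1' (little-endian host assumed)."""
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

if __name__ == "__main__":
    # Destination/gateway pairs copied from the routing table in the log:
    for dest, gw in [("00000000", "0114C80A"),   # default route via 10.200.20.1
                     ("10813FA8", "0114C80A"),   # 168.63.129.16 (WireServer)
                     ("FEA9FEA9", "0114C80A")]:  # 169.254.169.254 (instance metadata)
        print(f"{hex_to_ip(dest):>15} via {hex_to_ip(gw)}")
```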
Dec 16 02:05:04.810756 waagent[2322]: 2025-12-16T02:05:04.810728Z INFO ExtHandler ExtHandler Dec 16 02:05:04.810903 waagent[2322]: 2025-12-16T02:05:04.810877Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 96dc78b0-6c13-4d21-aa9a-cfc2e91df42e correlation a6d11750-8a09-4a1d-9152-7a26dec741e8 created: 2025-12-16T02:04:06.373962Z] Dec 16 02:05:04.811252 waagent[2322]: 2025-12-16T02:05:04.811223Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 02:05:04.812310 waagent[2322]: 2025-12-16T02:05:04.812268Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 16 02:05:04.833462 waagent[2322]: 2025-12-16T02:05:04.833418Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 02:05:04.833462 waagent[2322]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 02:05:04.833735 waagent[2322]: 2025-12-16T02:05:04.833697Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 41DCD6AF-7F3C-4D47-81FE-465126EDCE63;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 02:05:04.865528 waagent[2322]: 2025-12-16T02:05:04.865209Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 02:05:04.865528 waagent[2322]: Executing ['ip', '-a', '-o', 'link']: Dec 16 02:05:04.865528 waagent[2322]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 02:05:04.865528 waagent[2322]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:57:70 brd ff:ff:ff:ff:ff:ff\ altname enx002248bb5770 Dec 16 02:05:04.865528 waagent[2322]: 3: enP46958s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:57:70 brd ff:ff:ff:ff:ff:ff\ altname enP46958p0s2 Dec 16 02:05:04.865528 waagent[2322]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 02:05:04.865528 waagent[2322]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 02:05:04.865528 waagent[2322]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 02:05:04.865528 waagent[2322]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 02:05:04.865528 waagent[2322]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 02:05:04.865528 waagent[2322]: 2: eth0 inet6 fe80::222:48ff:febb:5770/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 02:05:04.898415 waagent[2322]: 2025-12-16T02:05:04.898332Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 02:05:04.898415 waagent[2322]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 02:05:04.898415 waagent[2322]: pkts bytes target prot opt in out source destination Dec 16 02:05:04.898415 waagent[2322]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 02:05:04.898415 waagent[2322]: pkts bytes target prot opt in out source destination Dec 16 02:05:04.898415 waagent[2322]: Chain OUTPUT (policy ACCEPT 4 packets, 406 bytes) Dec 16 02:05:04.898415 waagent[2322]: pkts bytes target prot opt in out source destination Dec 16 
02:05:04.898415 waagent[2322]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 02:05:04.898415 waagent[2322]: 3 534 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 02:05:04.898415 waagent[2322]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 02:05:04.900984 waagent[2322]: 2025-12-16T02:05:04.900942Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 02:05:04.900984 waagent[2322]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 02:05:04.900984 waagent[2322]: pkts bytes target prot opt in out source destination Dec 16 02:05:04.900984 waagent[2322]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 02:05:04.900984 waagent[2322]: pkts bytes target prot opt in out source destination Dec 16 02:05:04.900984 waagent[2322]: Chain OUTPUT (policy ACCEPT 4 packets, 406 bytes) Dec 16 02:05:04.900984 waagent[2322]: pkts bytes target prot opt in out source destination Dec 16 02:05:04.900984 waagent[2322]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 02:05:04.900984 waagent[2322]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 02:05:04.900984 waagent[2322]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 02:05:04.901165 waagent[2322]: 2025-12-16T02:05:04.901140Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 02:05:11.498605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 02:05:11.499957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:05:11.637110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:11.649157 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:05:11.719263 kubelet[2480]: E1216 02:05:11.719187 2480 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:05:11.721927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:05:11.722043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:05:11.722574 systemd[1]: kubelet.service: Consumed 110ms CPU time, 104.5M memory peak. Dec 16 02:05:21.748743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 02:05:21.750110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:05:22.043667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
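[Editor's note] The firewall dump above shows three OUTPUT rules toward 168.63.129.16: accept tcp/53, accept traffic owned by UID 0, drop INVALID/NEW connections. A rough, illustrative reconstruction of commands that would produce such rules; the `security` table is inferred from the agent's own `iptables -w -t security -L OUTPUT` query earlier in the log, and this is not the agent's actual invocation:

```python
#!/usr/bin/env python3
"""Illustrative reconstruction (assumption) of the OUTPUT-chain rules dumped above."""
import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    # allow DNS-port traffic to the WireServer
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    # allow root-owned (agent) traffic to the WireServer
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # drop other new/invalid connections to it
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

def apply_rules() -> None:
    for rule in RULES:
        subprocess.run(["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule],
                       check=True)

if __name__ == "__main__":
    apply_rules()   # requires root and the nf_tables-backed iptables seen in the log
```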
Dec 16 02:05:22.046426 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:05:22.073957 kubelet[2494]: E1216 02:05:22.073922 2494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:05:22.076299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:05:22.076406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:05:22.076873 systemd[1]: kubelet.service: Consumed 103ms CPU time, 104.9M memory peak. Dec 16 02:05:22.587967 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 02:05:22.589246 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:56158.service - OpenSSH per-connection server daemon (10.200.16.10:56158). Dec 16 02:05:23.300374 sshd[2502]: Accepted publickey for core from 10.200.16.10 port 56158 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:05:23.301404 sshd-session[2502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:23.305155 systemd-logind[2060]: New session 4 of user core. Dec 16 02:05:23.314923 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 02:05:23.615428 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:56164.service - OpenSSH per-connection server daemon (10.200.16.10:56164). Dec 16 02:05:23.629375 chronyd[2037]: Selected source PHC0 Dec 16 02:05:24.002285 sshd[2509]: Accepted publickey for core from 10.200.16.10 port 56164 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:05:24.003383 sshd-session[2509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:24.007210 systemd-logind[2060]: New session 5 of user core. Dec 16 02:05:24.015035 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 02:05:24.215902 sshd[2513]: Connection closed by 10.200.16.10 port 56164 Dec 16 02:05:24.216389 sshd-session[2509]: pam_unix(sshd:session): session closed for user core Dec 16 02:05:24.219346 systemd-logind[2060]: Session 5 logged out. Waiting for processes to exit. Dec 16 02:05:24.220265 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:56164.service: Deactivated successfully. Dec 16 02:05:24.222003 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 02:05:24.224526 systemd-logind[2060]: Removed session 5. Dec 16 02:05:24.301773 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:56176.service - OpenSSH per-connection server daemon (10.200.16.10:56176). Dec 16 02:05:24.694417 sshd[2519]: Accepted publickey for core from 10.200.16.10 port 56176 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:05:24.695565 sshd-session[2519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:24.699441 systemd-logind[2060]: New session 6 of user core. Dec 16 02:05:24.706107 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 02:05:24.906362 sshd[2523]: Connection closed by 10.200.16.10 port 56176 Dec 16 02:05:24.906915 sshd-session[2519]: pam_unix(sshd:session): session closed for user core Dec 16 02:05:24.910503 systemd-logind[2060]: Session 6 logged out. Waiting for processes to exit. 
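[Editor's note] The repeated `SHA256:q0d+...` values in the sshd "Accepted publickey" lines above are OpenSSH key fingerprints: the unpadded base64 SHA-256 digest of the raw public-key blob. A sketch that computes one; the authorized_keys path is an assumed location for the `core` user, not taken from the log:

```python
#!/usr/bin/env python3
"""Sketch: compute an OpenSSH-style SHA256 key fingerprint."""
import base64
import hashlib

def ssh_fingerprint(authorized_key_line: str) -> str:
    parts = authorized_key_line.split()        # "ssh-rsa AAAA... comment"
    if len(parts) < 2:
        raise ValueError("not an authorized_keys entry")
    blob = base64.b64decode(parts[1])          # raw public-key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    path = "/home/core/.ssh/authorized_keys"   # assumed path for the 'core' user above
    try:
        with open(path) as f:
            print(ssh_fingerprint(f.readline()))
    except FileNotFoundError:
        print(f"{path} not found")
```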
Dec 16 02:05:24.910627 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:56176.service: Deactivated successfully. Dec 16 02:05:24.912502 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 02:05:24.914248 systemd-logind[2060]: Removed session 6. Dec 16 02:05:24.997507 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:56182.service - OpenSSH per-connection server daemon (10.200.16.10:56182). Dec 16 02:05:25.416287 sshd[2529]: Accepted publickey for core from 10.200.16.10 port 56182 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:05:25.417323 sshd-session[2529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:25.420964 systemd-logind[2060]: New session 7 of user core. Dec 16 02:05:25.428076 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 02:05:25.649768 sshd[2533]: Connection closed by 10.200.16.10 port 56182 Dec 16 02:05:25.650295 sshd-session[2529]: pam_unix(sshd:session): session closed for user core Dec 16 02:05:25.653853 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:56182.service: Deactivated successfully. Dec 16 02:05:25.655362 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 02:05:25.656049 systemd-logind[2060]: Session 7 logged out. Waiting for processes to exit. Dec 16 02:05:25.657052 systemd-logind[2060]: Removed session 7. Dec 16 02:05:25.740609 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:56190.service - OpenSSH per-connection server daemon (10.200.16.10:56190). Dec 16 02:05:26.160229 sshd[2539]: Accepted publickey for core from 10.200.16.10 port 56190 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:05:26.161411 sshd-session[2539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:26.165276 systemd-logind[2060]: New session 8 of user core. Dec 16 02:05:26.172118 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 02:05:26.480685 sudo[2544]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 02:05:26.480958 sudo[2544]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:05:26.509209 sudo[2544]: pam_unix(sudo:session): session closed for user root Dec 16 02:05:26.585733 sshd[2543]: Connection closed by 10.200.16.10 port 56190 Dec 16 02:05:26.586419 sshd-session[2539]: pam_unix(sshd:session): session closed for user core Dec 16 02:05:26.590346 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:56190.service: Deactivated successfully. Dec 16 02:05:26.591856 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 02:05:26.592522 systemd-logind[2060]: Session 8 logged out. Waiting for processes to exit. Dec 16 02:05:26.593687 systemd-logind[2060]: Removed session 8. Dec 16 02:05:26.668716 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:56192.service - OpenSSH per-connection server daemon (10.200.16.10:56192). Dec 16 02:05:27.062359 sshd[2551]: Accepted publickey for core from 10.200.16.10 port 56192 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:05:27.063544 sshd-session[2551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:27.067467 systemd-logind[2060]: New session 9 of user core. Dec 16 02:05:27.071930 systemd[1]: Started session-9.scope - Session 9 of User core. 
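[Editor's note] The audit records that follow (and the NETFILTER_CFG records further down) carry a hex-encoded `proctitle=` field: the process's command line with NUL bytes between arguments. A small decoder, applied to the auditctl record below:

```python
#!/usr/bin/env python3
"""Sketch: decode the hex `proctitle=` fields in the audit records below."""

def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

if __name__ == "__main__":
    # Value copied verbatim from the auditctl audit record below:
    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))  # -> /sbin/auditctl -R /etc/audit/audit.rules
```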
Dec 16 02:05:27.210183 sudo[2557]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 02:05:27.210398 sudo[2557]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:05:27.216003 sudo[2557]: pam_unix(sudo:session): session closed for user root Dec 16 02:05:27.221224 sudo[2556]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 02:05:27.221720 sudo[2556]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:05:27.227237 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 02:05:27.259289 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 16 02:05:27.259424 kernel: audit: type=1305 audit(1765850727.255:260): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 02:05:27.255000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 02:05:27.259530 augenrules[2581]: No rules Dec 16 02:05:27.267081 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 02:05:27.267299 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 02:05:27.255000 audit[2581]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff547d020 a2=420 a3=0 items=0 ppid=2562 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:27.269994 sudo[2556]: pam_unix(sudo:session): session closed for user root Dec 16 02:05:27.283256 kernel: audit: type=1300 audit(1765850727.255:260): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff547d020 a2=420 a3=0 items=0 ppid=2562 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:27.255000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:05:27.290148 kernel: audit: type=1327 audit(1765850727.255:260): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:05:27.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.301548 kernel: audit: type=1130 audit(1765850727.266:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.301622 kernel: audit: type=1131 audit(1765850727.266:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:05:27.266000 audit[2556]: USER_END pid=2556 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.325906 kernel: audit: type=1106 audit(1765850727.266:263): pid=2556 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.326019 kernel: audit: type=1104 audit(1765850727.266:264): pid=2556 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.266000 audit[2556]: CRED_DISP pid=2556 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.341933 sshd[2555]: Connection closed by 10.200.16.10 port 56192 Dec 16 02:05:27.341844 sshd-session[2551]: pam_unix(sshd:session): session closed for user core Dec 16 02:05:27.342000 audit[2551]: USER_END pid=2551 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:05:27.346921 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:56192.service: Deactivated successfully. Dec 16 02:05:27.349267 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 02:05:27.350706 systemd-logind[2060]: Session 9 logged out. Waiting for processes to exit. Dec 16 02:05:27.352525 systemd-logind[2060]: Removed session 9. Dec 16 02:05:27.343000 audit[2551]: CRED_DISP pid=2551 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:05:27.373094 kernel: audit: type=1106 audit(1765850727.342:265): pid=2551 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:05:27.373135 kernel: audit: type=1104 audit(1765850727.343:266): pid=2551 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:05:27.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.37:22-10.200.16.10:56192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.385818 kernel: audit: type=1131 audit(1765850727.346:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.37:22-10.200.16.10:56192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:05:27.433735 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:56198.service - OpenSSH per-connection server daemon (10.200.16.10:56198). Dec 16 02:05:27.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:56198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:27.848000 audit[2590]: USER_ACCT pid=2590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:05:27.849891 sshd[2590]: Accepted publickey for core from 10.200.16.10 port 56198 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:05:27.849000 audit[2590]: CRED_ACQ pid=2590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:05:27.849000 audit[2590]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd146f10 a2=3 a3=0 items=0 ppid=1 pid=2590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:27.849000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:05:27.850985 sshd-session[2590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:05:27.854587 systemd-logind[2060]: New session 10 of user core. Dec 16 02:05:27.866937 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 02:05:27.868000 audit[2590]: USER_START pid=2590 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:05:27.869000 audit[2594]: CRED_ACQ pid=2594 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:05:28.008000 audit[2595]: USER_ACCT pid=2595 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:05:28.009279 sudo[2595]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 02:05:28.008000 audit[2595]: CRED_REFR pid=2595 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:05:28.008000 audit[2595]: USER_START pid=2595 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 02:05:28.009488 sudo[2595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:05:29.181455 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 02:05:29.192007 (dockerd)[2613]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 02:05:30.129765 dockerd[2613]: time="2025-12-16T02:05:30.129706192Z" level=info msg="Starting up" Dec 16 02:05:30.131210 dockerd[2613]: time="2025-12-16T02:05:30.131039488Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 02:05:30.139494 dockerd[2613]: time="2025-12-16T02:05:30.139456448Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 02:05:30.180306 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1985304115-merged.mount: Deactivated successfully. Dec 16 02:05:30.232551 systemd[1]: var-lib-docker-metacopy\x2dcheck1903737102-merged.mount: Deactivated successfully. Dec 16 02:05:30.254747 dockerd[2613]: time="2025-12-16T02:05:30.254702464Z" level=info msg="Loading containers: start." Dec 16 02:05:30.285825 kernel: Initializing XFRM netlink socket Dec 16 02:05:30.363000 audit[2659]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.363000 audit[2659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd2109b30 a2=0 a3=0 items=0 ppid=2613 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 02:05:30.365000 audit[2661]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.365000 audit[2661]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffdeec61b0 a2=0 a3=0 items=0 ppid=2613 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 02:05:30.366000 audit[2663]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.366000 audit[2663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1ba30b0 a2=0 a3=0 items=0 ppid=2613 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 02:05:30.368000 audit[2665]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.368000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5f87270 a2=0 a3=0 
items=0 ppid=2613 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 02:05:30.370000 audit[2667]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.370000 audit[2667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffccd896a0 a2=0 a3=0 items=0 ppid=2613 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.370000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 02:05:30.371000 audit[2669]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.371000 audit[2669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdca54740 a2=0 a3=0 items=0 ppid=2613 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.371000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:05:30.373000 audit[2671]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.373000 audit[2671]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffc248100 a2=0 a3=0 items=0 ppid=2613 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.373000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:05:30.374000 audit[2673]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.374000 audit[2673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffff128510 a2=0 a3=0 items=0 ppid=2613 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 02:05:30.423000 audit[2676]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.423000 audit[2676]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffdba771f0 a2=0 a3=0 items=0 ppid=2613 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.423000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 02:05:30.424000 audit[2678]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.424000 audit[2678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffea0f9640 a2=0 a3=0 items=0 ppid=2613 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.424000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 02:05:30.426000 audit[2680]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.426000 audit[2680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe8284690 a2=0 a3=0 items=0 ppid=2613 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 02:05:30.428000 audit[2682]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.428000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd4020f80 a2=0 a3=0 items=0 ppid=2613 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.428000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:05:30.429000 audit[2684]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.429000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe6567130 a2=0 a3=0 items=0 ppid=2613 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 02:05:30.487000 audit[2714]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2714 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.487000 audit[2714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc91e0fd0 a2=0 a3=0 items=0 ppid=2613 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 02:05:30.487000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 02:05:30.489000 audit[2716]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.489000 audit[2716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe3775bc0 a2=0 a3=0 items=0 ppid=2613 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.489000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 02:05:30.490000 audit[2718]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.490000 audit[2718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc15127c0 a2=0 a3=0 items=0 ppid=2613 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.490000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 02:05:30.492000 audit[2720]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.492000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe592d670 a2=0 a3=0 items=0 ppid=2613 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.492000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 02:05:30.493000 audit[2722]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.493000 audit[2722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcc6053a0 a2=0 a3=0 items=0 ppid=2613 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.493000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 02:05:30.495000 audit[2724]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.495000 audit[2724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffc0422c0 a2=0 a3=0 items=0 ppid=2613 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.495000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:05:30.497000 audit[2726]: NETFILTER_CFG table=filter:24 family=10 entries=1 
op=nft_register_chain pid=2726 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.497000 audit[2726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe127b590 a2=0 a3=0 items=0 ppid=2613 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.497000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:05:30.498000 audit[2728]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2728 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.498000 audit[2728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdad2c7b0 a2=0 a3=0 items=0 ppid=2613 pid=2728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.498000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 02:05:30.500000 audit[2730]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.500000 audit[2730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffffe4de30 a2=0 a3=0 items=0 ppid=2613 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.500000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 02:05:30.502000 audit[2732]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.502000 audit[2732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd293b390 a2=0 a3=0 items=0 ppid=2613 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.502000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 02:05:30.504000 audit[2734]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.504000 audit[2734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffde345f40 a2=0 a3=0 items=0 ppid=2613 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.504000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 02:05:30.505000 audit[2736]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule 
pid=2736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.505000 audit[2736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc7d0ff70 a2=0 a3=0 items=0 ppid=2613 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.505000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:05:30.507000 audit[2738]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.507000 audit[2738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffcb9b4e0 a2=0 a3=0 items=0 ppid=2613 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.507000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 02:05:30.511000 audit[2743]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.511000 audit[2743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe0c99f70 a2=0 a3=0 items=0 ppid=2613 pid=2743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.511000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 02:05:30.513000 audit[2745]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.513000 audit[2745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff46a1ca0 a2=0 a3=0 items=0 ppid=2613 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.513000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 02:05:30.514000 audit[2747]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.514000 audit[2747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffff7fb4b0 a2=0 a3=0 items=0 ppid=2613 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.514000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 02:05:30.516000 audit[2749]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2749 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.516000 audit[2749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8994e70 a2=0 a3=0 items=0 ppid=2613 
pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 02:05:30.518000 audit[2751]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.518000 audit[2751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcd69d4c0 a2=0 a3=0 items=0 ppid=2613 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.518000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 02:05:30.519000 audit[2753]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:30.519000 audit[2753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe2b719d0 a2=0 a3=0 items=0 ppid=2613 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.519000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 02:05:30.601000 audit[2758]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.601000 audit[2758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc1e79340 a2=0 a3=0 items=0 ppid=2613 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 02:05:30.603000 audit[2760]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.603000 audit[2760]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe8b040d0 a2=0 a3=0 items=0 ppid=2613 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.603000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 02:05:30.610000 audit[2768]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.610000 audit[2768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc53b9610 a2=0 a3=0 items=0 ppid=2613 pid=2768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.610000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 02:05:30.614000 audit[2773]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.614000 audit[2773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcc295080 a2=0 a3=0 items=0 ppid=2613 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 02:05:30.616000 audit[2775]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.616000 audit[2775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc549f4b0 a2=0 a3=0 items=0 ppid=2613 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 02:05:30.618000 audit[2777]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.618000 audit[2777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd57b4cb0 a2=0 a3=0 items=0 ppid=2613 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.618000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 02:05:30.620000 audit[2779]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.620000 audit[2779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffca178000 a2=0 a3=0 items=0 ppid=2613 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.620000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:05:30.621000 audit[2781]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:30.621000 audit[2781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff20e4f30 a2=0 a3=0 items=0 
ppid=2613 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:30.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 02:05:30.623536 systemd-networkd[1667]: docker0: Link UP Dec 16 02:05:30.649907 dockerd[2613]: time="2025-12-16T02:05:30.649862224Z" level=info msg="Loading containers: done." Dec 16 02:05:30.736972 dockerd[2613]: time="2025-12-16T02:05:30.736852392Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 02:05:30.736972 dockerd[2613]: time="2025-12-16T02:05:30.736942936Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 02:05:30.737139 dockerd[2613]: time="2025-12-16T02:05:30.737053304Z" level=info msg="Initializing buildkit" Dec 16 02:05:30.790632 dockerd[2613]: time="2025-12-16T02:05:30.790584112Z" level=info msg="Completed buildkit initialization" Dec 16 02:05:30.795928 dockerd[2613]: time="2025-12-16T02:05:30.795851912Z" level=info msg="Daemon has completed initialization" Dec 16 02:05:30.796650 dockerd[2613]: time="2025-12-16T02:05:30.796053928Z" level=info msg="API listen on /run/docker.sock" Dec 16 02:05:30.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:30.796177 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 02:05:31.700198 containerd[2088]: time="2025-12-16T02:05:31.700150505Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 02:05:32.248435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 02:05:32.249703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:05:32.338909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:32.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:32.342505 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 02:05:32.342551 kernel: audit: type=1130 audit(1765850732.338:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:05:32.355775 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:05:32.487383 kubelet[2828]: E1216 02:05:32.487303 2828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:05:32.489575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:05:32.489684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:05:32.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:05:32.491872 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.7M memory peak. Dec 16 02:05:32.504886 kernel: audit: type=1131 audit(1765850732.491:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:05:33.222087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2088934144.mount: Deactivated successfully. Dec 16 02:05:34.490749 containerd[2088]: time="2025-12-16T02:05:34.490088626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:34.497134 containerd[2088]: time="2025-12-16T02:05:34.497100584Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=26467641" Dec 16 02:05:34.501924 containerd[2088]: time="2025-12-16T02:05:34.501901962Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:34.510294 containerd[2088]: time="2025-12-16T02:05:34.510270897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:34.510804 containerd[2088]: time="2025-12-16T02:05:34.510772256Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.810576126s" Dec 16 02:05:34.510904 containerd[2088]: time="2025-12-16T02:05:34.510889636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 16 02:05:34.512014 containerd[2088]: time="2025-12-16T02:05:34.511990733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 02:05:36.512576 containerd[2088]: time="2025-12-16T02:05:36.512493317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:36.517032 containerd[2088]: 
time="2025-12-16T02:05:36.517002902Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 16 02:05:36.520655 containerd[2088]: time="2025-12-16T02:05:36.520621844Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:36.533814 containerd[2088]: time="2025-12-16T02:05:36.533455907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:36.533877 containerd[2088]: time="2025-12-16T02:05:36.533840087Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 2.02181912s" Dec 16 02:05:36.533877 containerd[2088]: time="2025-12-16T02:05:36.533870920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 16 02:05:36.534647 containerd[2088]: time="2025-12-16T02:05:36.534617942Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 02:05:38.275829 containerd[2088]: time="2025-12-16T02:05:38.275567157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:38.278985 containerd[2088]: time="2025-12-16T02:05:38.278959637Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 16 02:05:38.282694 containerd[2088]: time="2025-12-16T02:05:38.282671574Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:38.287187 containerd[2088]: time="2025-12-16T02:05:38.287145286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:38.288951 containerd[2088]: time="2025-12-16T02:05:38.288913740Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.754196971s" Dec 16 02:05:38.289071 containerd[2088]: time="2025-12-16T02:05:38.289056392Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 16 02:05:38.290223 containerd[2088]: time="2025-12-16T02:05:38.290130313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 02:05:39.244835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666110105.mount: Deactivated successfully. 
Dec 16 02:05:40.001945 containerd[2088]: time="2025-12-16T02:05:40.001873286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:40.008099 containerd[2088]: time="2025-12-16T02:05:40.007931214Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28254952" Dec 16 02:05:40.015635 containerd[2088]: time="2025-12-16T02:05:40.015606598Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:40.021018 containerd[2088]: time="2025-12-16T02:05:40.020978547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:40.021721 containerd[2088]: time="2025-12-16T02:05:40.021693935Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.731528621s" Dec 16 02:05:40.021767 containerd[2088]: time="2025-12-16T02:05:40.021725280Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 16 02:05:40.022281 containerd[2088]: time="2025-12-16T02:05:40.022255109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 02:05:40.744818 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 02:05:41.182210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143360409.mount: Deactivated successfully. 
Dec 16 02:05:42.042828 containerd[2088]: time="2025-12-16T02:05:42.042365829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:42.047013 containerd[2088]: time="2025-12-16T02:05:42.046971707Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 16 02:05:42.055235 containerd[2088]: time="2025-12-16T02:05:42.055209218Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:42.065177 containerd[2088]: time="2025-12-16T02:05:42.065126187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:42.065971 containerd[2088]: time="2025-12-16T02:05:42.065860120Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.04357653s" Dec 16 02:05:42.065971 containerd[2088]: time="2025-12-16T02:05:42.065890377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 16 02:05:42.066524 containerd[2088]: time="2025-12-16T02:05:42.066489673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 02:05:42.498459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 02:05:42.500976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:05:42.593621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:42.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:42.607835 kernel: audit: type=1130 audit(1765850742.592:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:42.610041 (kubelet)[2969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:05:42.636275 kubelet[2969]: E1216 02:05:42.636212 2969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:05:42.639104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:05:42.639332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:05:42.639851 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105M memory peak. 
Dec 16 02:05:42.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:05:42.652806 kernel: audit: type=1131 audit(1765850742.638:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:05:43.183223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287562909.mount: Deactivated successfully. Dec 16 02:05:43.229364 containerd[2088]: time="2025-12-16T02:05:43.229312429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:05:43.233106 containerd[2088]: time="2025-12-16T02:05:43.233065098Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 02:05:43.237159 containerd[2088]: time="2025-12-16T02:05:43.237122387Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:05:43.241603 containerd[2088]: time="2025-12-16T02:05:43.241546722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:05:43.242050 containerd[2088]: time="2025-12-16T02:05:43.241883416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.175272065s" Dec 16 02:05:43.242050 containerd[2088]: time="2025-12-16T02:05:43.241909089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 02:05:43.242424 containerd[2088]: time="2025-12-16T02:05:43.242399348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 02:05:44.056345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048479128.mount: Deactivated successfully. Dec 16 02:05:45.880525 update_engine[2065]: I20251216 02:05:45.880366 2065 update_attempter.cc:509] Updating boot flags... 
Dec 16 02:05:46.941351 containerd[2088]: time="2025-12-16T02:05:46.941299442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:46.952651 containerd[2088]: time="2025-12-16T02:05:46.952609059Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69351524" Dec 16 02:05:46.959648 containerd[2088]: time="2025-12-16T02:05:46.959621604Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:46.964573 containerd[2088]: time="2025-12-16T02:05:46.964533121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:05:46.965120 containerd[2088]: time="2025-12-16T02:05:46.965096443Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.722669005s" Dec 16 02:05:46.965172 containerd[2088]: time="2025-12-16T02:05:46.965122747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 16 02:05:50.995973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:50.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:50.996440 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105M memory peak. Dec 16 02:05:50.998999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:05:50.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:51.020584 kernel: audit: type=1130 audit(1765850750.995:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:51.020654 kernel: audit: type=1131 audit(1765850750.995:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:51.036986 systemd[1]: Reload requested from client PID 3125 ('systemctl') (unit session-10.scope)... Dec 16 02:05:51.036999 systemd[1]: Reloading... Dec 16 02:05:51.138871 zram_generator::config[3178]: No configuration found. Dec 16 02:05:51.289739 systemd[1]: Reloading finished in 252 ms. 
Dec 16 02:05:51.307000 audit: BPF prog-id=87 op=LOAD Dec 16 02:05:51.307000 audit: BPF prog-id=81 op=UNLOAD Dec 16 02:05:51.318264 kernel: audit: type=1334 audit(1765850751.307:324): prog-id=87 op=LOAD Dec 16 02:05:51.318322 kernel: audit: type=1334 audit(1765850751.307:325): prog-id=81 op=UNLOAD Dec 16 02:05:51.318348 kernel: audit: type=1334 audit(1765850751.312:326): prog-id=88 op=LOAD Dec 16 02:05:51.312000 audit: BPF prog-id=88 op=LOAD Dec 16 02:05:51.312000 audit: BPF prog-id=89 op=LOAD Dec 16 02:05:51.331501 kernel: audit: type=1334 audit(1765850751.312:327): prog-id=89 op=LOAD Dec 16 02:05:51.331567 kernel: audit: type=1334 audit(1765850751.312:328): prog-id=82 op=UNLOAD Dec 16 02:05:51.312000 audit: BPF prog-id=82 op=UNLOAD Dec 16 02:05:51.312000 audit: BPF prog-id=83 op=UNLOAD Dec 16 02:05:51.336373 kernel: audit: type=1334 audit(1765850751.312:329): prog-id=83 op=UNLOAD Dec 16 02:05:51.312000 audit: BPF prog-id=90 op=LOAD Dec 16 02:05:51.340771 kernel: audit: type=1334 audit(1765850751.312:330): prog-id=90 op=LOAD Dec 16 02:05:51.321000 audit: BPF prog-id=67 op=UNLOAD Dec 16 02:05:51.344874 kernel: audit: type=1334 audit(1765850751.321:331): prog-id=67 op=UNLOAD Dec 16 02:05:51.321000 audit: BPF prog-id=91 op=LOAD Dec 16 02:05:51.321000 audit: BPF prog-id=92 op=LOAD Dec 16 02:05:51.321000 audit: BPF prog-id=68 op=UNLOAD Dec 16 02:05:51.321000 audit: BPF prog-id=69 op=UNLOAD Dec 16 02:05:51.326000 audit: BPF prog-id=93 op=LOAD Dec 16 02:05:51.326000 audit: BPF prog-id=78 op=UNLOAD Dec 16 02:05:51.326000 audit: BPF prog-id=94 op=LOAD Dec 16 02:05:51.331000 audit: BPF prog-id=95 op=LOAD Dec 16 02:05:51.331000 audit: BPF prog-id=79 op=UNLOAD Dec 16 02:05:51.331000 audit: BPF prog-id=80 op=UNLOAD Dec 16 02:05:51.335000 audit: BPF prog-id=96 op=LOAD Dec 16 02:05:51.335000 audit: BPF prog-id=84 op=UNLOAD Dec 16 02:05:51.340000 audit: BPF prog-id=97 op=LOAD Dec 16 02:05:51.344000 audit: BPF prog-id=98 op=LOAD Dec 16 02:05:51.344000 audit: BPF prog-id=85 op=UNLOAD Dec 16 02:05:51.344000 audit: BPF prog-id=86 op=UNLOAD Dec 16 02:05:51.345000 audit: BPF prog-id=99 op=LOAD Dec 16 02:05:51.345000 audit: BPF prog-id=74 op=UNLOAD Dec 16 02:05:51.345000 audit: BPF prog-id=100 op=LOAD Dec 16 02:05:51.345000 audit: BPF prog-id=101 op=LOAD Dec 16 02:05:51.345000 audit: BPF prog-id=75 op=UNLOAD Dec 16 02:05:51.345000 audit: BPF prog-id=76 op=UNLOAD Dec 16 02:05:51.346000 audit: BPF prog-id=102 op=LOAD Dec 16 02:05:51.346000 audit: BPF prog-id=77 op=UNLOAD Dec 16 02:05:51.347000 audit: BPF prog-id=103 op=LOAD Dec 16 02:05:51.347000 audit: BPF prog-id=72 op=UNLOAD Dec 16 02:05:51.348000 audit: BPF prog-id=104 op=LOAD Dec 16 02:05:51.348000 audit: BPF prog-id=73 op=UNLOAD Dec 16 02:05:51.349000 audit: BPF prog-id=105 op=LOAD Dec 16 02:05:51.349000 audit: BPF prog-id=106 op=LOAD Dec 16 02:05:51.349000 audit: BPF prog-id=70 op=UNLOAD Dec 16 02:05:51.349000 audit: BPF prog-id=71 op=UNLOAD Dec 16 02:05:51.361150 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 02:05:51.361332 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 02:05:51.361667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:51.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:05:51.362864 systemd[1]: kubelet.service: Consumed 76ms CPU time, 95.1M memory peak. 
Dec 16 02:05:51.364137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:05:51.531026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:51.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:51.534873 (kubelet)[3242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:05:51.684464 kubelet[3242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:05:51.684464 kubelet[3242]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:05:51.684464 kubelet[3242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:05:51.684800 kubelet[3242]: I1216 02:05:51.684471 3242 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:05:52.111232 kubelet[3242]: I1216 02:05:52.111191 3242 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 02:05:52.111232 kubelet[3242]: I1216 02:05:52.111225 3242 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:05:52.111447 kubelet[3242]: I1216 02:05:52.111429 3242 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 02:05:52.127756 kubelet[3242]: E1216 02:05:52.126669 3242 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 02:05:52.128149 kubelet[3242]: I1216 02:05:52.128130 3242 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:05:52.134977 kubelet[3242]: I1216 02:05:52.134962 3242 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:05:52.137427 kubelet[3242]: I1216 02:05:52.137406 3242 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 02:05:52.138735 kubelet[3242]: I1216 02:05:52.138704 3242 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:05:52.138925 kubelet[3242]: I1216 02:05:52.138810 3242 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-de7f477aa9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:05:52.139049 kubelet[3242]: I1216 02:05:52.139038 3242 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:05:52.139100 kubelet[3242]: I1216 02:05:52.139092 3242 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 02:05:52.139852 kubelet[3242]: I1216 02:05:52.139835 3242 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:05:52.142264 kubelet[3242]: I1216 02:05:52.142247 3242 kubelet.go:480] "Attempting to sync node with API server" Dec 16 02:05:52.142343 kubelet[3242]: I1216 02:05:52.142335 3242 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:05:52.142400 kubelet[3242]: I1216 02:05:52.142393 3242 kubelet.go:386] "Adding apiserver pod source" Dec 16 02:05:52.142454 kubelet[3242]: I1216 02:05:52.142446 3242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:05:52.146288 kubelet[3242]: E1216 02:05:52.146262 3242 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-de7f477aa9&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 02:05:52.147814 kubelet[3242]: E1216 02:05:52.147772 3242 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Dec 16 02:05:52.147881 kubelet[3242]: I1216 02:05:52.147857 3242 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:05:52.148261 kubelet[3242]: I1216 02:05:52.148231 3242 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 02:05:52.148306 kubelet[3242]: W1216 02:05:52.148282 3242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 02:05:52.150149 kubelet[3242]: I1216 02:05:52.150129 3242 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 02:05:52.150195 kubelet[3242]: I1216 02:05:52.150173 3242 server.go:1289] "Started kubelet" Dec 16 02:05:52.150270 kubelet[3242]: I1216 02:05:52.150250 3242 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:05:52.151279 kubelet[3242]: I1216 02:05:52.150904 3242 server.go:317] "Adding debug handlers to kubelet server" Dec 16 02:05:52.151754 kubelet[3242]: I1216 02:05:52.151711 3242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:05:52.152020 kubelet[3242]: I1216 02:05:52.151998 3242 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:05:52.153816 kubelet[3242]: E1216 02:05:52.152099 3242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-a-de7f477aa9.18818fe738440d2d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-de7f477aa9,UID:ci-4547.0.0-a-de7f477aa9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-de7f477aa9,},FirstTimestamp:2025-12-16 02:05:52.150146349 +0000 UTC m=+0.612540182,LastTimestamp:2025-12-16 02:05:52.150146349 +0000 UTC m=+0.612540182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-de7f477aa9,}" Dec 16 02:05:52.154417 kubelet[3242]: I1216 02:05:52.154399 3242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:05:52.156071 kubelet[3242]: I1216 02:05:52.156055 3242 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 02:05:52.156260 kubelet[3242]: I1216 02:05:52.156245 3242 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:05:52.157110 kubelet[3242]: E1216 02:05:52.157078 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:52.157526 kubelet[3242]: E1216 02:05:52.157495 3242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-de7f477aa9?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms" Dec 16 02:05:52.157576 kubelet[3242]: I1216 02:05:52.157565 3242 reconciler.go:26] "Reconciler: start to sync state" Dec 16 02:05:52.157698 kubelet[3242]: I1216 
02:05:52.157686 3242 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 02:05:52.158622 kubelet[3242]: E1216 02:05:52.158577 3242 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 02:05:52.158759 kubelet[3242]: I1216 02:05:52.158739 3242 factory.go:223] Registration of the systemd container factory successfully Dec 16 02:05:52.159053 kubelet[3242]: I1216 02:05:52.159018 3242 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:05:52.161063 kubelet[3242]: E1216 02:05:52.161038 3242 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:05:52.161213 kubelet[3242]: I1216 02:05:52.161195 3242 factory.go:223] Registration of the containerd container factory successfully Dec 16 02:05:52.162000 audit[3257]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:52.162000 audit[3257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcaeff2e0 a2=0 a3=0 items=0 ppid=3242 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:05:52.163000 audit[3258]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:52.163000 audit[3258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0b35730 a2=0 a3=0 items=0 ppid=3242 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:05:52.165000 audit[3260]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:52.165000 audit[3260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcaebcd60 a2=0 a3=0 items=0 ppid=3242 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:05:52.167000 audit[3262]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:52.167000 audit[3262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd4a94140 a2=0 a3=0 items=0 
ppid=3242 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:05:52.173000 audit[3266]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:52.173000 audit[3266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc9e2e870 a2=0 a3=0 items=0 ppid=3242 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 02:05:52.175005 kubelet[3242]: I1216 02:05:52.174972 3242 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 02:05:52.174000 audit[3267]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:52.174000 audit[3267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc3668950 a2=0 a3=0 items=0 ppid=3242 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:05:52.175926 kubelet[3242]: I1216 02:05:52.175906 3242 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 02:05:52.175926 kubelet[3242]: I1216 02:05:52.175922 3242 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 02:05:52.175966 kubelet[3242]: I1216 02:05:52.175940 3242 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
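The NETFILTER_CFG audit records above carry the exact command the kubelet invoked in the PROCTITLE field, hex-encoded with NUL bytes between arguments. A minimal Python sketch for decoding them (the helper name is illustrative, not from the log):

    # Decode an audit PROCTITLE hex blob into the NUL-separated argv it encodes.
    def decode_proctitle(hex_blob: str) -> str:
        argv = bytes.fromhex(hex_blob).split(b"\x00")
        return " ".join(arg.decode() for arg in argv)

    # The record for pid=3260 above decodes to the rule hooking OUTPUT into KUBE-FIREWALL:
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C"
    ))
    # -> iptables -w 5 -W 100000 -I OUTPUT -t filter -j KUBE-FIREWALL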
Dec 16 02:05:52.175966 kubelet[3242]: I1216 02:05:52.175946 3242 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 02:05:52.175998 kubelet[3242]: E1216 02:05:52.175977 3242 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:05:52.175000 audit[3268]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:52.175000 audit[3268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf4101b0 a2=0 a3=0 items=0 ppid=3242 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:05:52.179074 kubelet[3242]: E1216 02:05:52.179050 3242 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 02:05:52.178000 audit[3271]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:52.178000 audit[3271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff16ceac0 a2=0 a3=0 items=0 ppid=3242 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.178000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:05:52.178000 audit[3273]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:52.178000 audit[3273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe6f15c0 a2=0 a3=0 items=0 ppid=3242 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.178000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:05:52.179000 audit[3275]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:52.179000 audit[3275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcce40670 a2=0 a3=0 items=0 ppid=3242 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:05:52.184000 audit[3276]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3276 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:05:52.184000 audit[3276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff113eb10 a2=0 a3=0 items=0 ppid=3242 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.184000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:05:52.186000 audit[3278]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:05:52.186000 audit[3278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeceba3b0 a2=0 a3=0 items=0 ppid=3242 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:05:52.191251 kubelet[3242]: I1216 02:05:52.191222 3242 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:05:52.191251 kubelet[3242]: I1216 02:05:52.191249 3242 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:05:52.191323 kubelet[3242]: I1216 02:05:52.191267 3242 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:05:52.196900 kubelet[3242]: I1216 02:05:52.196880 3242 policy_none.go:49] "None policy: Start" Dec 16 02:05:52.196900 kubelet[3242]: I1216 02:05:52.196901 3242 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 02:05:52.196977 kubelet[3242]: I1216 02:05:52.196914 3242 state_mem.go:35] "Initializing new in-memory state store" Dec 16 02:05:52.206479 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 02:05:52.217092 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 02:05:52.219666 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 02:05:52.237673 kubelet[3242]: E1216 02:05:52.237654 3242 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 02:05:52.237959 kubelet[3242]: I1216 02:05:52.237937 3242 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:05:52.238168 kubelet[3242]: I1216 02:05:52.238138 3242 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:05:52.238415 kubelet[3242]: I1216 02:05:52.238401 3242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:05:52.239704 kubelet[3242]: E1216 02:05:52.239656 3242 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 02:05:52.239906 kubelet[3242]: E1216 02:05:52.239835 3242 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:52.329864 systemd[1]: Created slice kubepods-burstable-pod409398990ce1738c1484dc3509941841.slice - libcontainer container kubepods-burstable-pod409398990ce1738c1484dc3509941841.slice. Dec 16 02:05:52.338638 kubelet[3242]: E1216 02:05:52.338358 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.340634 kubelet[3242]: I1216 02:05:52.340450 3242 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.341027 kubelet[3242]: E1216 02:05:52.341007 3242 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.342996 systemd[1]: Created slice kubepods-burstable-pod023e55097c40b472b177ea7eb42e9abb.slice - libcontainer container kubepods-burstable-pod023e55097c40b472b177ea7eb42e9abb.slice. Dec 16 02:05:52.344959 kubelet[3242]: E1216 02:05:52.344890 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.354926 systemd[1]: Created slice kubepods-burstable-podecef1b4678df78a2aab975e60601b8de.slice - libcontainer container kubepods-burstable-podecef1b4678df78a2aab975e60601b8de.slice. Dec 16 02:05:52.356272 kubelet[3242]: E1216 02:05:52.356256 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358617 kubelet[3242]: E1216 02:05:52.358591 3242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-de7f477aa9?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Dec 16 02:05:52.358734 kubelet[3242]: I1216 02:05:52.358694 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/409398990ce1738c1484dc3509941841-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-de7f477aa9\" (UID: \"409398990ce1738c1484dc3509941841\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358782 kubelet[3242]: I1216 02:05:52.358739 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/409398990ce1738c1484dc3509941841-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-de7f477aa9\" (UID: \"409398990ce1738c1484dc3509941841\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358782 kubelet[3242]: I1216 02:05:52.358754 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: 
\"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358782 kubelet[3242]: I1216 02:05:52.358777 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358952 kubelet[3242]: I1216 02:05:52.358826 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358952 kubelet[3242]: I1216 02:05:52.358839 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ecef1b4678df78a2aab975e60601b8de-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-de7f477aa9\" (UID: \"ecef1b4678df78a2aab975e60601b8de\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358952 kubelet[3242]: I1216 02:05:52.358847 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/409398990ce1738c1484dc3509941841-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-de7f477aa9\" (UID: \"409398990ce1738c1484dc3509941841\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358952 kubelet[3242]: I1216 02:05:52.358856 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.358952 kubelet[3242]: I1216 02:05:52.358867 3242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.543272 kubelet[3242]: I1216 02:05:52.543245 3242 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.543604 kubelet[3242]: E1216 02:05:52.543570 3242 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.640275 containerd[2088]: time="2025-12-16T02:05:52.640235017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-de7f477aa9,Uid:409398990ce1738c1484dc3509941841,Namespace:kube-system,Attempt:0,}" Dec 16 02:05:52.645771 containerd[2088]: time="2025-12-16T02:05:52.645673019Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-de7f477aa9,Uid:023e55097c40b472b177ea7eb42e9abb,Namespace:kube-system,Attempt:0,}" Dec 16 02:05:52.657601 containerd[2088]: time="2025-12-16T02:05:52.657552466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-de7f477aa9,Uid:ecef1b4678df78a2aab975e60601b8de,Namespace:kube-system,Attempt:0,}" Dec 16 02:05:52.733537 containerd[2088]: time="2025-12-16T02:05:52.733457617Z" level=info msg="connecting to shim e05e1a61e8d22b12c7803c6e56e2e64a8ab23320866b1ccda532a9125a3d29fd" address="unix:///run/containerd/s/617bfc0713a7c51e01bd683de697dba23497de1d8ade9b1b904542ae435bf63f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:05:52.746073 containerd[2088]: time="2025-12-16T02:05:52.746037327Z" level=info msg="connecting to shim c20d35a8c67e817ac0a63ef73fdc487097f96bc2810d370e5a07024794e55f44" address="unix:///run/containerd/s/e1b320e6aa0d50745b99b285fba546e0fd67e9cb2d74d97ae489065d5ee14550" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:05:52.759712 kubelet[3242]: E1216 02:05:52.759666 3242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-de7f477aa9?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Dec 16 02:05:52.764117 systemd[1]: Started cri-containerd-e05e1a61e8d22b12c7803c6e56e2e64a8ab23320866b1ccda532a9125a3d29fd.scope - libcontainer container e05e1a61e8d22b12c7803c6e56e2e64a8ab23320866b1ccda532a9125a3d29fd. Dec 16 02:05:52.780017 systemd[1]: Started cri-containerd-c20d35a8c67e817ac0a63ef73fdc487097f96bc2810d370e5a07024794e55f44.scope - libcontainer container c20d35a8c67e817ac0a63ef73fdc487097f96bc2810d370e5a07024794e55f44. 
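The API server at 10.200.20.37:6443 is still refusing connections at this point, so the lease controller keeps logging "Failed to ensure lease exists, will retry" with the interval doubling on each attempt (200ms, then 400ms, then 800ms just above). A small sketch, assuming the journal is available as plain text, that pulls those intervals out; the regex and helper name are illustrative:

    import re

    # Extract the retry interval from each "Failed to ensure lease exists" record
    # so the doubling backoff (200ms -> 400ms -> 800ms) is easy to see.
    LEASE_RETRY = re.compile(r'Failed to ensure lease exists.*?interval="(\d+ms)"', re.S)

    def lease_retry_intervals(journal_text: str) -> list[str]:
        return LEASE_RETRY.findall(journal_text)

Run over this section of the journal, it returns ["200ms", "400ms", "800ms"].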
Dec 16 02:05:52.784000 audit: BPF prog-id=107 op=LOAD Dec 16 02:05:52.785000 audit: BPF prog-id=108 op=LOAD Dec 16 02:05:52.785000 audit[3297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3288 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356531613631653864323262313263373830336336653536653265 Dec 16 02:05:52.785000 audit: BPF prog-id=108 op=UNLOAD Dec 16 02:05:52.785000 audit[3297]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356531613631653864323262313263373830336336653536653265 Dec 16 02:05:52.785000 audit: BPF prog-id=109 op=LOAD Dec 16 02:05:52.785000 audit[3297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3288 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356531613631653864323262313263373830336336653536653265 Dec 16 02:05:52.785000 audit: BPF prog-id=110 op=LOAD Dec 16 02:05:52.785000 audit[3297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3288 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356531613631653864323262313263373830336336653536653265 Dec 16 02:05:52.785000 audit: BPF prog-id=110 op=UNLOAD Dec 16 02:05:52.785000 audit[3297]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356531613631653864323262313263373830336336653536653265 Dec 16 02:05:52.785000 audit: BPF prog-id=109 op=UNLOAD Dec 16 02:05:52.785000 audit[3297]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356531613631653864323262313263373830336336653536653265 Dec 16 02:05:52.785000 audit: BPF prog-id=111 op=LOAD Dec 16 02:05:52.785000 audit[3297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3288 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356531613631653864323262313263373830336336653536653265 Dec 16 02:05:52.787841 containerd[2088]: time="2025-12-16T02:05:52.787397126Z" level=info msg="connecting to shim f6fa7193e8982cb246eb932ca238d361161f05112a51cdd102bfa16815f8037d" address="unix:///run/containerd/s/3254686e9a290361407f5e52044f4c7270845c5d55316e92ad06f0fa4b4c0b3f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:05:52.798000 audit: BPF prog-id=112 op=LOAD Dec 16 02:05:52.799000 audit: BPF prog-id=113 op=LOAD Dec 16 02:05:52.799000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3309 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332306433356138633637653831376163306136336566373366646334 Dec 16 02:05:52.799000 audit: BPF prog-id=113 op=UNLOAD Dec 16 02:05:52.799000 audit[3326]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3309 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332306433356138633637653831376163306136336566373366646334 Dec 16 02:05:52.799000 audit: BPF prog-id=114 op=LOAD Dec 16 02:05:52.799000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3309 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.799000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332306433356138633637653831376163306136336566373366646334 Dec 16 02:05:52.800000 audit: BPF prog-id=115 op=LOAD Dec 16 02:05:52.800000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3309 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332306433356138633637653831376163306136336566373366646334 Dec 16 02:05:52.800000 audit: BPF prog-id=115 op=UNLOAD Dec 16 02:05:52.800000 audit[3326]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3309 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332306433356138633637653831376163306136336566373366646334 Dec 16 02:05:52.800000 audit: BPF prog-id=114 op=UNLOAD Dec 16 02:05:52.800000 audit[3326]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3309 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332306433356138633637653831376163306136336566373366646334 Dec 16 02:05:52.800000 audit: BPF prog-id=116 op=LOAD Dec 16 02:05:52.800000 audit[3326]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3309 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332306433356138633637653831376163306136336566373366646334 Dec 16 02:05:52.812090 systemd[1]: Started cri-containerd-f6fa7193e8982cb246eb932ca238d361161f05112a51cdd102bfa16815f8037d.scope - libcontainer container f6fa7193e8982cb246eb932ca238d361161f05112a51cdd102bfa16815f8037d. 
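Each RunPodSandbox request above resolves into a containerd "connecting to shim <id>" message and a matching systemd "Started cri-containerd-<id>.scope" unit, as with the scheduler sandbox started just above. A hedged sketch correlating the two, useful when mapping a sandbox back to its cgroup scope (names and regexes are illustrative):

    import re

    SHIM = re.compile(r'connecting to shim ([0-9a-f]{64})')
    SCOPE = re.compile(r'Started cri-containerd-([0-9a-f]{64})\.scope')

    def sandbox_to_scope(journal_text: str) -> dict[str, str]:
        # Map each shim/sandbox id seen by containerd to its systemd scope unit,
        # keeping only the sandboxes whose scope actually started.
        started = set(SCOPE.findall(journal_text))
        return {sid: f"cri-containerd-{sid}.scope"
                for sid in SHIM.findall(journal_text) if sid in started}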
Dec 16 02:05:52.830000 audit: BPF prog-id=117 op=LOAD Dec 16 02:05:52.831000 audit: BPF prog-id=118 op=LOAD Dec 16 02:05:52.831000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3354 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636666137313933653839383263623234366562393332636132333864 Dec 16 02:05:52.831000 audit: BPF prog-id=118 op=UNLOAD Dec 16 02:05:52.831000 audit[3377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3354 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636666137313933653839383263623234366562393332636132333864 Dec 16 02:05:52.831000 audit: BPF prog-id=119 op=LOAD Dec 16 02:05:52.831000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3354 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636666137313933653839383263623234366562393332636132333864 Dec 16 02:05:52.831000 audit: BPF prog-id=120 op=LOAD Dec 16 02:05:52.831000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3354 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636666137313933653839383263623234366562393332636132333864 Dec 16 02:05:52.831000 audit: BPF prog-id=120 op=UNLOAD Dec 16 02:05:52.831000 audit[3377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3354 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636666137313933653839383263623234366562393332636132333864 Dec 16 02:05:52.831000 audit: BPF prog-id=119 op=UNLOAD Dec 16 02:05:52.831000 audit[3377]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3354 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636666137313933653839383263623234366562393332636132333864 Dec 16 02:05:52.831000 audit: BPF prog-id=121 op=LOAD Dec 16 02:05:52.831000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3354 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636666137313933653839383263623234366562393332636132333864 Dec 16 02:05:52.833520 containerd[2088]: time="2025-12-16T02:05:52.833091500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-de7f477aa9,Uid:023e55097c40b472b177ea7eb42e9abb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c20d35a8c67e817ac0a63ef73fdc487097f96bc2810d370e5a07024794e55f44\"" Dec 16 02:05:52.840035 containerd[2088]: time="2025-12-16T02:05:52.840005887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-de7f477aa9,Uid:409398990ce1738c1484dc3509941841,Namespace:kube-system,Attempt:0,} returns sandbox id \"e05e1a61e8d22b12c7803c6e56e2e64a8ab23320866b1ccda532a9125a3d29fd\"" Dec 16 02:05:52.845186 containerd[2088]: time="2025-12-16T02:05:52.845143104Z" level=info msg="CreateContainer within sandbox \"c20d35a8c67e817ac0a63ef73fdc487097f96bc2810d370e5a07024794e55f44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 02:05:52.850881 containerd[2088]: time="2025-12-16T02:05:52.850700503Z" level=info msg="CreateContainer within sandbox \"e05e1a61e8d22b12c7803c6e56e2e64a8ab23320866b1ccda532a9125a3d29fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 02:05:52.869121 containerd[2088]: time="2025-12-16T02:05:52.869092780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-de7f477aa9,Uid:ecef1b4678df78a2aab975e60601b8de,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6fa7193e8982cb246eb932ca238d361161f05112a51cdd102bfa16815f8037d\"" Dec 16 02:05:52.877633 containerd[2088]: time="2025-12-16T02:05:52.877610804Z" level=info msg="Container 26eb8965023449c2ec320e8183f4d561eb3675e173565e3c812391d92ae73077: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:05:52.878511 containerd[2088]: time="2025-12-16T02:05:52.878484168Z" level=info msg="CreateContainer within sandbox \"f6fa7193e8982cb246eb932ca238d361161f05112a51cdd102bfa16815f8037d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 02:05:52.886418 containerd[2088]: time="2025-12-16T02:05:52.886389100Z" level=info msg="Container 62f60fb1be90c0585b6126b298f0def3e098ee463263a306334fa05314b2f595: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:05:52.899944 containerd[2088]: 
time="2025-12-16T02:05:52.899911440Z" level=info msg="CreateContainer within sandbox \"c20d35a8c67e817ac0a63ef73fdc487097f96bc2810d370e5a07024794e55f44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"26eb8965023449c2ec320e8183f4d561eb3675e173565e3c812391d92ae73077\"" Dec 16 02:05:52.900591 containerd[2088]: time="2025-12-16T02:05:52.900567158Z" level=info msg="StartContainer for \"26eb8965023449c2ec320e8183f4d561eb3675e173565e3c812391d92ae73077\"" Dec 16 02:05:52.901439 containerd[2088]: time="2025-12-16T02:05:52.901405818Z" level=info msg="connecting to shim 26eb8965023449c2ec320e8183f4d561eb3675e173565e3c812391d92ae73077" address="unix:///run/containerd/s/e1b320e6aa0d50745b99b285fba546e0fd67e9cb2d74d97ae489065d5ee14550" protocol=ttrpc version=3 Dec 16 02:05:52.918950 systemd[1]: Started cri-containerd-26eb8965023449c2ec320e8183f4d561eb3675e173565e3c812391d92ae73077.scope - libcontainer container 26eb8965023449c2ec320e8183f4d561eb3675e173565e3c812391d92ae73077. Dec 16 02:05:52.930829 containerd[2088]: time="2025-12-16T02:05:52.930682588Z" level=info msg="CreateContainer within sandbox \"e05e1a61e8d22b12c7803c6e56e2e64a8ab23320866b1ccda532a9125a3d29fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"62f60fb1be90c0585b6126b298f0def3e098ee463263a306334fa05314b2f595\"" Dec 16 02:05:52.932229 containerd[2088]: time="2025-12-16T02:05:52.931233422Z" level=info msg="StartContainer for \"62f60fb1be90c0585b6126b298f0def3e098ee463263a306334fa05314b2f595\"" Dec 16 02:05:52.932229 containerd[2088]: time="2025-12-16T02:05:52.931989415Z" level=info msg="connecting to shim 62f60fb1be90c0585b6126b298f0def3e098ee463263a306334fa05314b2f595" address="unix:///run/containerd/s/617bfc0713a7c51e01bd683de697dba23497de1d8ade9b1b904542ae435bf63f" protocol=ttrpc version=3 Dec 16 02:05:52.932000 audit: BPF prog-id=122 op=LOAD Dec 16 02:05:52.933000 audit: BPF prog-id=123 op=LOAD Dec 16 02:05:52.933000 audit[3423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3309 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656238393635303233343439633265633332306538313833663464 Dec 16 02:05:52.933000 audit: BPF prog-id=123 op=UNLOAD Dec 16 02:05:52.933000 audit[3423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3309 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656238393635303233343439633265633332306538313833663464 Dec 16 02:05:52.933000 audit: BPF prog-id=124 op=LOAD Dec 16 02:05:52.933000 audit[3423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3309 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656238393635303233343439633265633332306538313833663464 Dec 16 02:05:52.933000 audit: BPF prog-id=125 op=LOAD Dec 16 02:05:52.933000 audit[3423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3309 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656238393635303233343439633265633332306538313833663464 Dec 16 02:05:52.933000 audit: BPF prog-id=125 op=UNLOAD Dec 16 02:05:52.933000 audit[3423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3309 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656238393635303233343439633265633332306538313833663464 Dec 16 02:05:52.934000 audit: BPF prog-id=124 op=UNLOAD Dec 16 02:05:52.934000 audit[3423]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3309 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.935248 containerd[2088]: time="2025-12-16T02:05:52.935218529Z" level=info msg="Container 1dea86d3b58c0151054a73d1c92c63a4ed0f7bb1945fac6c3ab4350338b103a9: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:05:52.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656238393635303233343439633265633332306538313833663464 Dec 16 02:05:52.934000 audit: BPF prog-id=126 op=LOAD Dec 16 02:05:52.934000 audit[3423]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3309 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:52.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236656238393635303233343439633265633332306538313833663464 Dec 16 02:05:52.946917 kubelet[3242]: I1216 02:05:52.946376 3242 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.946917 kubelet[3242]: E1216 02:05:52.946634 3242 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:52.955969 systemd[1]: Started cri-containerd-62f60fb1be90c0585b6126b298f0def3e098ee463263a306334fa05314b2f595.scope - libcontainer container 62f60fb1be90c0585b6126b298f0def3e098ee463263a306334fa05314b2f595. Dec 16 02:05:52.958490 containerd[2088]: time="2025-12-16T02:05:52.958429420Z" level=info msg="CreateContainer within sandbox \"f6fa7193e8982cb246eb932ca238d361161f05112a51cdd102bfa16815f8037d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1dea86d3b58c0151054a73d1c92c63a4ed0f7bb1945fac6c3ab4350338b103a9\"" Dec 16 02:05:52.959337 containerd[2088]: time="2025-12-16T02:05:52.959299136Z" level=info msg="StartContainer for \"1dea86d3b58c0151054a73d1c92c63a4ed0f7bb1945fac6c3ab4350338b103a9\"" Dec 16 02:05:52.960352 containerd[2088]: time="2025-12-16T02:05:52.960333586Z" level=info msg="connecting to shim 1dea86d3b58c0151054a73d1c92c63a4ed0f7bb1945fac6c3ab4350338b103a9" address="unix:///run/containerd/s/3254686e9a290361407f5e52044f4c7270845c5d55316e92ad06f0fa4b4c0b3f" protocol=ttrpc version=3 Dec 16 02:05:52.981071 systemd[1]: Started cri-containerd-1dea86d3b58c0151054a73d1c92c63a4ed0f7bb1945fac6c3ab4350338b103a9.scope - libcontainer container 1dea86d3b58c0151054a73d1c92c63a4ed0f7bb1945fac6c3ab4350338b103a9. Dec 16 02:05:53.011417 containerd[2088]: time="2025-12-16T02:05:53.011364288Z" level=info msg="StartContainer for \"26eb8965023449c2ec320e8183f4d561eb3675e173565e3c812391d92ae73077\" returns successfully" Dec 16 02:05:53.013000 audit: BPF prog-id=127 op=LOAD Dec 16 02:05:53.014902 kubelet[3242]: E1216 02:05:53.014875 3242 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-de7f477aa9&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 02:05:53.014000 audit: BPF prog-id=128 op=LOAD Dec 16 02:05:53.014000 audit[3468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3354 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656138366433623538633031353130353461373364316339326336 Dec 16 02:05:53.014000 audit: BPF prog-id=128 op=UNLOAD Dec 16 02:05:53.014000 audit[3468]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3354 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656138366433623538633031353130353461373364316339326336 Dec 16 02:05:53.014000 audit: BPF 
prog-id=129 op=LOAD Dec 16 02:05:53.014000 audit[3468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3354 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656138366433623538633031353130353461373364316339326336 Dec 16 02:05:53.014000 audit: BPF prog-id=130 op=LOAD Dec 16 02:05:53.014000 audit[3468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3354 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656138366433623538633031353130353461373364316339326336 Dec 16 02:05:53.014000 audit: BPF prog-id=130 op=UNLOAD Dec 16 02:05:53.014000 audit[3468]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3354 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656138366433623538633031353130353461373364316339326336 Dec 16 02:05:53.014000 audit: BPF prog-id=129 op=UNLOAD Dec 16 02:05:53.014000 audit[3468]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3354 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656138366433623538633031353130353461373364316339326336 Dec 16 02:05:53.015000 audit: BPF prog-id=131 op=LOAD Dec 16 02:05:53.014000 audit: BPF prog-id=132 op=LOAD Dec 16 02:05:53.014000 audit[3468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3354 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164656138366433623538633031353130353461373364316339326336 Dec 16 02:05:53.016000 audit: BPF prog-id=133 op=LOAD Dec 16 02:05:53.016000 audit[3442]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3288 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663630666231626539306330353835623631323662323938663064 Dec 16 02:05:53.016000 audit: BPF prog-id=133 op=UNLOAD Dec 16 02:05:53.016000 audit[3442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663630666231626539306330353835623631323662323938663064 Dec 16 02:05:53.016000 audit: BPF prog-id=134 op=LOAD Dec 16 02:05:53.016000 audit[3442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3288 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663630666231626539306330353835623631323662323938663064 Dec 16 02:05:53.016000 audit: BPF prog-id=135 op=LOAD Dec 16 02:05:53.016000 audit[3442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3288 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663630666231626539306330353835623631323662323938663064 Dec 16 02:05:53.017000 audit: BPF prog-id=135 op=UNLOAD Dec 16 02:05:53.017000 audit[3442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663630666231626539306330353835623631323662323938663064 Dec 16 02:05:53.019000 audit: BPF prog-id=134 op=UNLOAD Dec 16 02:05:53.019000 audit[3442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663630666231626539306330353835623631323662323938663064 Dec 16 02:05:53.019000 audit: BPF prog-id=136 op=LOAD Dec 16 02:05:53.019000 audit[3442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3288 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:05:53.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632663630666231626539306330353835623631323662323938663064 Dec 16 02:05:53.065124 containerd[2088]: time="2025-12-16T02:05:53.064420120Z" level=info msg="StartContainer for \"1dea86d3b58c0151054a73d1c92c63a4ed0f7bb1945fac6c3ab4350338b103a9\" returns successfully" Dec 16 02:05:53.065124 containerd[2088]: time="2025-12-16T02:05:53.064682512Z" level=info msg="StartContainer for \"62f60fb1be90c0585b6126b298f0def3e098ee463263a306334fa05314b2f595\" returns successfully" Dec 16 02:05:53.188671 kubelet[3242]: E1216 02:05:53.188640 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:53.192593 kubelet[3242]: E1216 02:05:53.192569 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:53.194077 kubelet[3242]: E1216 02:05:53.194057 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:53.748270 kubelet[3242]: I1216 02:05:53.748239 3242 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:54.146521 kubelet[3242]: E1216 02:05:54.146376 3242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:54.196759 kubelet[3242]: E1216 02:05:54.196528 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:54.197082 kubelet[3242]: E1216 02:05:54.197068 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:54.304877 kubelet[3242]: I1216 02:05:54.304806 3242 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:54.305324 kubelet[3242]: E1216 02:05:54.305050 3242 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547.0.0-a-de7f477aa9\": node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:54.327708 kubelet[3242]: E1216 02:05:54.327684 
3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:54.427916 kubelet[3242]: E1216 02:05:54.427766 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:54.528259 kubelet[3242]: E1216 02:05:54.528213 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:54.628872 kubelet[3242]: E1216 02:05:54.628775 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:54.729760 kubelet[3242]: E1216 02:05:54.729638 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:54.830473 kubelet[3242]: E1216 02:05:54.830433 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:54.931238 kubelet[3242]: E1216 02:05:54.931198 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.031668 kubelet[3242]: E1216 02:05:55.031631 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.132487 kubelet[3242]: E1216 02:05:55.132439 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.197749 kubelet[3242]: E1216 02:05:55.197684 3242 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:55.233201 kubelet[3242]: E1216 02:05:55.233163 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.334204 kubelet[3242]: E1216 02:05:55.334090 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.434762 kubelet[3242]: E1216 02:05:55.434718 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.535920 kubelet[3242]: E1216 02:05:55.535860 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.636840 kubelet[3242]: E1216 02:05:55.636722 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.737460 kubelet[3242]: E1216 02:05:55.737413 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.837974 kubelet[3242]: E1216 02:05:55.837927 3242 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-de7f477aa9\" not found" Dec 16 02:05:55.958138 kubelet[3242]: I1216 02:05:55.958038 3242 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:55.969048 kubelet[3242]: I1216 02:05:55.969002 3242 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots]" Dec 16 02:05:55.969147 kubelet[3242]: I1216 02:05:55.969136 3242 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:55.981652 kubelet[3242]: I1216 02:05:55.981627 3242 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 02:05:55.981737 kubelet[3242]: I1216 02:05:55.981704 3242 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:55.985417 kubelet[3242]: I1216 02:05:55.985387 3242 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 02:05:56.148056 kubelet[3242]: I1216 02:05:56.148028 3242 apiserver.go:52] "Watching apiserver" Dec 16 02:05:56.158765 kubelet[3242]: I1216 02:05:56.158736 3242 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 02:05:56.614513 systemd[1]: Reload requested from client PID 3525 ('systemctl') (unit session-10.scope)... Dec 16 02:05:56.614531 systemd[1]: Reloading... Dec 16 02:05:56.698821 zram_generator::config[3575]: No configuration found. Dec 16 02:05:56.897567 systemd[1]: Reloading finished in 282 ms. Dec 16 02:05:56.932651 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:05:56.945179 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 02:05:56.945394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:56.961341 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 02:05:56.961405 kernel: audit: type=1131 audit(1765850756.944:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:56.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:05:56.945445 systemd[1]: kubelet.service: Consumed 736ms CPU time, 127.2M memory peak. Dec 16 02:05:56.950079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 02:05:56.961000 audit: BPF prog-id=137 op=LOAD Dec 16 02:05:56.966458 kernel: audit: type=1334 audit(1765850756.961:427): prog-id=137 op=LOAD Dec 16 02:05:56.968000 audit: BPF prog-id=103 op=UNLOAD Dec 16 02:05:56.968000 audit: BPF prog-id=138 op=LOAD Dec 16 02:05:56.977706 kernel: audit: type=1334 audit(1765850756.968:428): prog-id=103 op=UNLOAD Dec 16 02:05:56.977759 kernel: audit: type=1334 audit(1765850756.968:429): prog-id=138 op=LOAD Dec 16 02:05:56.968000 audit: BPF prog-id=96 op=UNLOAD Dec 16 02:05:56.983173 kernel: audit: type=1334 audit(1765850756.968:430): prog-id=96 op=UNLOAD Dec 16 02:05:56.983240 kernel: audit: type=1334 audit(1765850756.968:431): prog-id=139 op=LOAD Dec 16 02:05:56.968000 audit: BPF prog-id=139 op=LOAD Dec 16 02:05:56.968000 audit: BPF prog-id=140 op=LOAD Dec 16 02:05:56.989824 kernel: audit: type=1334 audit(1765850756.968:432): prog-id=140 op=LOAD Dec 16 02:05:56.968000 audit: BPF prog-id=97 op=UNLOAD Dec 16 02:05:56.994791 kernel: audit: type=1334 audit(1765850756.968:433): prog-id=97 op=UNLOAD Dec 16 02:05:56.968000 audit: BPF prog-id=98 op=UNLOAD Dec 16 02:05:56.998834 kernel: audit: type=1334 audit(1765850756.968:434): prog-id=98 op=UNLOAD Dec 16 02:05:56.998882 kernel: audit: type=1334 audit(1765850756.973:435): prog-id=141 op=LOAD Dec 16 02:05:56.973000 audit: BPF prog-id=141 op=LOAD Dec 16 02:05:56.973000 audit: BPF prog-id=102 op=UNLOAD Dec 16 02:05:56.977000 audit: BPF prog-id=142 op=LOAD Dec 16 02:05:56.977000 audit: BPF prog-id=143 op=LOAD Dec 16 02:05:56.977000 audit: BPF prog-id=105 op=UNLOAD Dec 16 02:05:56.977000 audit: BPF prog-id=106 op=UNLOAD Dec 16 02:05:56.985000 audit: BPF prog-id=144 op=LOAD Dec 16 02:05:56.985000 audit: BPF prog-id=104 op=UNLOAD Dec 16 02:05:56.989000 audit: BPF prog-id=145 op=LOAD Dec 16 02:05:56.989000 audit: BPF prog-id=87 op=UNLOAD Dec 16 02:05:56.989000 audit: BPF prog-id=146 op=LOAD Dec 16 02:05:56.989000 audit: BPF prog-id=147 op=LOAD Dec 16 02:05:56.989000 audit: BPF prog-id=88 op=UNLOAD Dec 16 02:05:56.989000 audit: BPF prog-id=89 op=UNLOAD Dec 16 02:05:57.002000 audit: BPF prog-id=148 op=LOAD Dec 16 02:05:57.002000 audit: BPF prog-id=93 op=UNLOAD Dec 16 02:05:57.002000 audit: BPF prog-id=149 op=LOAD Dec 16 02:05:57.002000 audit: BPF prog-id=150 op=LOAD Dec 16 02:05:57.002000 audit: BPF prog-id=94 op=UNLOAD Dec 16 02:05:57.002000 audit: BPF prog-id=95 op=UNLOAD Dec 16 02:05:57.002000 audit: BPF prog-id=151 op=LOAD Dec 16 02:05:57.002000 audit: BPF prog-id=99 op=UNLOAD Dec 16 02:05:57.003000 audit: BPF prog-id=152 op=LOAD Dec 16 02:05:57.003000 audit: BPF prog-id=153 op=LOAD Dec 16 02:05:57.003000 audit: BPF prog-id=100 op=UNLOAD Dec 16 02:05:57.003000 audit: BPF prog-id=101 op=UNLOAD Dec 16 02:05:57.004000 audit: BPF prog-id=154 op=LOAD Dec 16 02:05:57.004000 audit: BPF prog-id=90 op=UNLOAD Dec 16 02:05:57.004000 audit: BPF prog-id=155 op=LOAD Dec 16 02:05:57.004000 audit: BPF prog-id=156 op=LOAD Dec 16 02:05:57.004000 audit: BPF prog-id=91 op=UNLOAD Dec 16 02:05:57.004000 audit: BPF prog-id=92 op=UNLOAD Dec 16 02:05:57.094767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:05:57.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:05:57.099053 (kubelet)[3639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:05:57.227583 kubelet[3639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:05:57.227583 kubelet[3639]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:05:57.227583 kubelet[3639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:05:57.227583 kubelet[3639]: I1216 02:05:57.227127 3639 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:05:57.232258 kubelet[3639]: I1216 02:05:57.232231 3639 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 02:05:57.232451 kubelet[3639]: I1216 02:05:57.232345 3639 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:05:57.232608 kubelet[3639]: I1216 02:05:57.232596 3639 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 02:05:57.233809 kubelet[3639]: I1216 02:05:57.233592 3639 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 02:05:57.236817 kubelet[3639]: I1216 02:05:57.235272 3639 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:05:57.241922 kubelet[3639]: I1216 02:05:57.241902 3639 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:05:57.244354 kubelet[3639]: I1216 02:05:57.244338 3639 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 02:05:57.244611 kubelet[3639]: I1216 02:05:57.244593 3639 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:05:57.244780 kubelet[3639]: I1216 02:05:57.244666 3639 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-de7f477aa9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:05:57.244925 kubelet[3639]: I1216 02:05:57.244913 3639 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:05:57.245032 kubelet[3639]: I1216 02:05:57.244964 3639 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 02:05:57.245032 kubelet[3639]: I1216 02:05:57.245006 3639 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:05:57.245231 kubelet[3639]: I1216 02:05:57.245219 3639 kubelet.go:480] "Attempting to sync node with API server" Dec 16 02:05:57.245292 kubelet[3639]: I1216 02:05:57.245285 3639 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:05:57.245346 kubelet[3639]: I1216 02:05:57.245340 3639 kubelet.go:386] "Adding apiserver pod source" Dec 16 02:05:57.245393 kubelet[3639]: I1216 02:05:57.245387 3639 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:05:57.248213 kubelet[3639]: I1216 02:05:57.247588 3639 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:05:57.248377 kubelet[3639]: I1216 02:05:57.248356 3639 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 02:05:57.254156 kubelet[3639]: I1216 02:05:57.254000 3639 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 02:05:57.254156 kubelet[3639]: I1216 02:05:57.254031 3639 server.go:1289] "Started kubelet" Dec 16 02:05:57.256384 kubelet[3639]: I1216 02:05:57.256352 3639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:05:57.268736 kubelet[3639]: I1216 
02:05:57.268703 3639 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:05:57.269321 kubelet[3639]: I1216 02:05:57.269303 3639 server.go:317] "Adding debug handlers to kubelet server" Dec 16 02:05:57.271481 kubelet[3639]: I1216 02:05:57.271433 3639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:05:57.271657 kubelet[3639]: I1216 02:05:57.271638 3639 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:05:57.272141 kubelet[3639]: I1216 02:05:57.271759 3639 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:05:57.272809 kubelet[3639]: I1216 02:05:57.272489 3639 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 02:05:57.275259 kubelet[3639]: E1216 02:05:57.275048 3639 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:05:57.275343 kubelet[3639]: I1216 02:05:57.275319 3639 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:05:57.277251 kubelet[3639]: I1216 02:05:57.277225 3639 factory.go:223] Registration of the containerd container factory successfully Dec 16 02:05:57.277251 kubelet[3639]: I1216 02:05:57.277246 3639 factory.go:223] Registration of the systemd container factory successfully Dec 16 02:05:57.277428 kubelet[3639]: I1216 02:05:57.277416 3639 reconciler.go:26] "Reconciler: start to sync state" Dec 16 02:05:57.282615 kubelet[3639]: I1216 02:05:57.282574 3639 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 02:05:57.283291 kubelet[3639]: I1216 02:05:57.277228 3639 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 02:05:57.288511 kubelet[3639]: I1216 02:05:57.288466 3639 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 02:05:57.288511 kubelet[3639]: I1216 02:05:57.288483 3639 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 02:05:57.288511 kubelet[3639]: I1216 02:05:57.288496 3639 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 02:05:57.288511 kubelet[3639]: I1216 02:05:57.288500 3639 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 02:05:57.288818 kubelet[3639]: E1216 02:05:57.288530 3639 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:05:57.308401 kubelet[3639]: I1216 02:05:57.308378 3639 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:05:57.308401 kubelet[3639]: I1216 02:05:57.308394 3639 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:05:57.308488 kubelet[3639]: I1216 02:05:57.308410 3639 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:05:57.308505 kubelet[3639]: I1216 02:05:57.308497 3639 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 02:05:57.308522 kubelet[3639]: I1216 02:05:57.308504 3639 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 02:05:57.308522 kubelet[3639]: I1216 02:05:57.308516 3639 policy_none.go:49] "None policy: Start" Dec 16 02:05:57.308548 kubelet[3639]: I1216 02:05:57.308523 3639 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 02:05:57.308548 kubelet[3639]: I1216 02:05:57.308530 3639 state_mem.go:35] "Initializing new in-memory state store" Dec 16 02:05:57.308596 kubelet[3639]: I1216 02:05:57.308582 3639 state_mem.go:75] "Updated machine memory state" Dec 16 02:05:57.312035 kubelet[3639]: E1216 02:05:57.311908 3639 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 02:05:57.312091 kubelet[3639]: I1216 02:05:57.312042 3639 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:05:57.312091 kubelet[3639]: I1216 02:05:57.312051 3639 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:05:57.312943 kubelet[3639]: I1216 02:05:57.312816 3639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:05:57.314143 kubelet[3639]: E1216 02:05:57.313981 3639 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 02:05:57.389891 kubelet[3639]: I1216 02:05:57.389857 3639 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.390200 kubelet[3639]: I1216 02:05:57.390109 3639 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.390554 kubelet[3639]: I1216 02:05:57.390478 3639 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.415211 kubelet[3639]: I1216 02:05:57.414955 3639 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 02:05:57.415211 kubelet[3639]: E1216 02:05:57.415026 3639 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-de7f477aa9\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.415812 kubelet[3639]: I1216 02:05:57.415778 3639 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 02:05:57.415885 kubelet[3639]: E1216 02:05:57.415826 3639 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-de7f477aa9\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.416219 kubelet[3639]: I1216 02:05:57.415915 3639 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 02:05:57.416219 kubelet[3639]: E1216 02:05:57.415951 3639 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" already exists" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.417448 kubelet[3639]: I1216 02:05:57.417425 3639 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.435183 kubelet[3639]: I1216 02:05:57.434926 3639 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.435183 kubelet[3639]: I1216 02:05:57.435026 3639 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579012 kubelet[3639]: I1216 02:05:57.578977 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/409398990ce1738c1484dc3509941841-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-de7f477aa9\" (UID: \"409398990ce1738c1484dc3509941841\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579012 kubelet[3639]: I1216 02:05:57.579010 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/409398990ce1738c1484dc3509941841-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-de7f477aa9\" (UID: \"409398990ce1738c1484dc3509941841\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579213 kubelet[3639]: I1216 02:05:57.579029 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/409398990ce1738c1484dc3509941841-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-de7f477aa9\" (UID: \"409398990ce1738c1484dc3509941841\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579213 kubelet[3639]: I1216 02:05:57.579043 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579213 kubelet[3639]: I1216 02:05:57.579052 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579213 kubelet[3639]: I1216 02:05:57.579086 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579213 kubelet[3639]: I1216 02:05:57.579097 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579301 kubelet[3639]: I1216 02:05:57.579110 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/023e55097c40b472b177ea7eb42e9abb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-de7f477aa9\" (UID: \"023e55097c40b472b177ea7eb42e9abb\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:57.579301 kubelet[3639]: I1216 02:05:57.579132 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ecef1b4678df78a2aab975e60601b8de-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-de7f477aa9\" (UID: \"ecef1b4678df78a2aab975e60601b8de\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:58.247762 kubelet[3639]: I1216 02:05:58.247143 3639 apiserver.go:52] "Watching apiserver" Dec 16 02:05:58.283392 kubelet[3639]: I1216 02:05:58.283362 3639 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 02:05:58.299374 kubelet[3639]: I1216 02:05:58.299349 3639 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:58.299749 kubelet[3639]: I1216 02:05:58.299732 3639 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:58.323172 kubelet[3639]: I1216 02:05:58.323151 3639 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 02:05:58.323255 kubelet[3639]: E1216 02:05:58.323193 3639 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-de7f477aa9\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:58.323586 kubelet[3639]: I1216 02:05:58.323520 3639 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 02:05:58.323654 kubelet[3639]: E1216 02:05:58.323591 3639 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-de7f477aa9\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" Dec 16 02:05:58.334228 kubelet[3639]: I1216 02:05:58.334122 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-a-de7f477aa9" podStartSLOduration=3.334109035 podStartE2EDuration="3.334109035s" podCreationTimestamp="2025-12-16 02:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:05:58.323778004 +0000 UTC m=+1.221431780" watchObservedRunningTime="2025-12-16 02:05:58.334109035 +0000 UTC m=+1.231762811" Dec 16 02:05:58.349963 kubelet[3639]: I1216 02:05:58.349919 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-a-de7f477aa9" podStartSLOduration=3.34990549 podStartE2EDuration="3.34990549s" podCreationTimestamp="2025-12-16 02:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:05:58.3341661 +0000 UTC m=+1.231819916" watchObservedRunningTime="2025-12-16 02:05:58.34990549 +0000 UTC m=+1.247559266" Dec 16 02:05:58.370272 kubelet[3639]: I1216 02:05:58.370197 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-de7f477aa9" podStartSLOduration=3.370183028 podStartE2EDuration="3.370183028s" podCreationTimestamp="2025-12-16 02:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:05:58.350253982 +0000 UTC m=+1.247907758" watchObservedRunningTime="2025-12-16 02:05:58.370183028 +0000 UTC m=+1.267836804" Dec 16 02:06:01.461352 kubelet[3639]: I1216 02:06:01.461307 3639 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 02:06:01.462438 containerd[2088]: time="2025-12-16T02:06:01.462018512Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 02:06:01.462655 kubelet[3639]: I1216 02:06:01.462163 3639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 02:06:02.630522 systemd[1]: Created slice kubepods-besteffort-poda30666d1_0166_4864_aae6_f0c78f4cd25b.slice - libcontainer container kubepods-besteffort-poda30666d1_0166_4864_aae6_f0c78f4cd25b.slice. 
Dec 16 02:06:02.705876 kubelet[3639]: I1216 02:06:02.705814 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a30666d1-0166-4864-aae6-f0c78f4cd25b-kube-proxy\") pod \"kube-proxy-lrmdw\" (UID: \"a30666d1-0166-4864-aae6-f0c78f4cd25b\") " pod="kube-system/kube-proxy-lrmdw" Dec 16 02:06:02.705876 kubelet[3639]: I1216 02:06:02.705843 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a30666d1-0166-4864-aae6-f0c78f4cd25b-xtables-lock\") pod \"kube-proxy-lrmdw\" (UID: \"a30666d1-0166-4864-aae6-f0c78f4cd25b\") " pod="kube-system/kube-proxy-lrmdw" Dec 16 02:06:02.705876 kubelet[3639]: I1216 02:06:02.705857 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a30666d1-0166-4864-aae6-f0c78f4cd25b-lib-modules\") pod \"kube-proxy-lrmdw\" (UID: \"a30666d1-0166-4864-aae6-f0c78f4cd25b\") " pod="kube-system/kube-proxy-lrmdw" Dec 16 02:06:02.706285 kubelet[3639]: I1216 02:06:02.706252 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kd9\" (UniqueName: \"kubernetes.io/projected/a30666d1-0166-4864-aae6-f0c78f4cd25b-kube-api-access-z2kd9\") pod \"kube-proxy-lrmdw\" (UID: \"a30666d1-0166-4864-aae6-f0c78f4cd25b\") " pod="kube-system/kube-proxy-lrmdw" Dec 16 02:06:02.756007 systemd[1]: Created slice kubepods-besteffort-pod31b7ae7f_ca01_444c_a85a_33f70cdc5716.slice - libcontainer container kubepods-besteffort-pod31b7ae7f_ca01_444c_a85a_33f70cdc5716.slice. Dec 16 02:06:02.806642 kubelet[3639]: I1216 02:06:02.806603 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwd4\" (UniqueName: \"kubernetes.io/projected/31b7ae7f-ca01-444c-a85a-33f70cdc5716-kube-api-access-qgwd4\") pod \"tigera-operator-7dcd859c48-zb7cs\" (UID: \"31b7ae7f-ca01-444c-a85a-33f70cdc5716\") " pod="tigera-operator/tigera-operator-7dcd859c48-zb7cs" Dec 16 02:06:02.806642 kubelet[3639]: I1216 02:06:02.806654 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31b7ae7f-ca01-444c-a85a-33f70cdc5716-var-lib-calico\") pod \"tigera-operator-7dcd859c48-zb7cs\" (UID: \"31b7ae7f-ca01-444c-a85a-33f70cdc5716\") " pod="tigera-operator/tigera-operator-7dcd859c48-zb7cs" Dec 16 02:06:02.942058 containerd[2088]: time="2025-12-16T02:06:02.941757518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lrmdw,Uid:a30666d1-0166-4864-aae6-f0c78f4cd25b,Namespace:kube-system,Attempt:0,}" Dec 16 02:06:02.995976 containerd[2088]: time="2025-12-16T02:06:02.995936026Z" level=info msg="connecting to shim 50146dcb0e691463262085a8b7644270ba6400363d5233d4f5d9a15f1d0393e8" address="unix:///run/containerd/s/c7ac86865bf4ee201cafac0ea6d23146ae4bc9391f69df57207237b4544f4580" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:06:03.019955 systemd[1]: Started cri-containerd-50146dcb0e691463262085a8b7644270ba6400363d5233d4f5d9a15f1d0393e8.scope - libcontainer container 50146dcb0e691463262085a8b7644270ba6400363d5233d4f5d9a15f1d0393e8. 
Dec 16 02:06:03.028000 audit: BPF prog-id=157 op=LOAD Dec 16 02:06:03.032193 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 02:06:03.032247 kernel: audit: type=1334 audit(1765850763.028:468): prog-id=157 op=LOAD Dec 16 02:06:03.035000 audit: BPF prog-id=158 op=LOAD Dec 16 02:06:03.040471 kernel: audit: type=1334 audit(1765850763.035:469): prog-id=158 op=LOAD Dec 16 02:06:03.056541 kernel: audit: type=1300 audit(1765850763.035:469): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.035000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.073889 kernel: audit: type=1327 audit(1765850763.035:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.035000 audit: BPF prog-id=158 op=UNLOAD Dec 16 02:06:03.078499 kernel: audit: type=1334 audit(1765850763.035:470): prog-id=158 op=UNLOAD Dec 16 02:06:03.079038 containerd[2088]: time="2025-12-16T02:06:03.079005582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zb7cs,Uid:31b7ae7f-ca01-444c-a85a-33f70cdc5716,Namespace:tigera-operator,Attempt:0,}" Dec 16 02:06:03.035000 audit[3710]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.095603 kernel: audit: type=1300 audit(1765850763.035:470): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.112456 kernel: audit: type=1327 audit(1765850763.035:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.036000 audit: BPF prog-id=159 op=LOAD Dec 16 02:06:03.117201 kernel: audit: type=1334 audit(1765850763.036:471): prog-id=159 
op=LOAD Dec 16 02:06:03.036000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.133582 kernel: audit: type=1300 audit(1765850763.036:471): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.150743 kernel: audit: type=1327 audit(1765850763.036:471): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.040000 audit: BPF prog-id=160 op=LOAD Dec 16 02:06:03.040000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.056000 audit: BPF prog-id=160 op=UNLOAD Dec 16 02:06:03.056000 audit[3710]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.056000 audit: BPF prog-id=159 op=UNLOAD Dec 16 02:06:03.056000 audit[3710]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.056000 audit: BPF prog-id=161 op=LOAD Dec 16 02:06:03.056000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 
ppid=3699 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530313436646362306536393134363332363230383561386237363434 Dec 16 02:06:03.174616 containerd[2088]: time="2025-12-16T02:06:03.174509037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lrmdw,Uid:a30666d1-0166-4864-aae6-f0c78f4cd25b,Namespace:kube-system,Attempt:0,} returns sandbox id \"50146dcb0e691463262085a8b7644270ba6400363d5233d4f5d9a15f1d0393e8\"" Dec 16 02:06:03.185773 containerd[2088]: time="2025-12-16T02:06:03.185450720Z" level=info msg="CreateContainer within sandbox \"50146dcb0e691463262085a8b7644270ba6400363d5233d4f5d9a15f1d0393e8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 02:06:03.203767 containerd[2088]: time="2025-12-16T02:06:03.203669342Z" level=info msg="connecting to shim a6dfd77674379c9c8df0449afa55061287a2bb5b5763dfbdc0de87b5a158f75b" address="unix:///run/containerd/s/1d08f373bfb10f4efe2863cf6a5201e9a362e59a179a88528b1b581318ba5996" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:06:03.219010 containerd[2088]: time="2025-12-16T02:06:03.218975134Z" level=info msg="Container abca325fa7753f85f9b63a38b9906a9b141cef35ab334061f26644a43fa0d2e3: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:06:03.220956 systemd[1]: Started cri-containerd-a6dfd77674379c9c8df0449afa55061287a2bb5b5763dfbdc0de87b5a158f75b.scope - libcontainer container a6dfd77674379c9c8df0449afa55061287a2bb5b5763dfbdc0de87b5a158f75b. 
Dec 16 02:06:03.228000 audit: BPF prog-id=162 op=LOAD Dec 16 02:06:03.229000 audit: BPF prog-id=163 op=LOAD Dec 16 02:06:03.229000 audit[3755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3743 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646664373736373433373963396338646630343439616661353530 Dec 16 02:06:03.229000 audit: BPF prog-id=163 op=UNLOAD Dec 16 02:06:03.229000 audit[3755]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3743 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646664373736373433373963396338646630343439616661353530 Dec 16 02:06:03.229000 audit: BPF prog-id=164 op=LOAD Dec 16 02:06:03.229000 audit[3755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3743 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646664373736373433373963396338646630343439616661353530 Dec 16 02:06:03.229000 audit: BPF prog-id=165 op=LOAD Dec 16 02:06:03.229000 audit[3755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3743 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646664373736373433373963396338646630343439616661353530 Dec 16 02:06:03.229000 audit: BPF prog-id=165 op=UNLOAD Dec 16 02:06:03.229000 audit[3755]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3743 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646664373736373433373963396338646630343439616661353530 Dec 16 02:06:03.229000 audit: BPF prog-id=164 op=UNLOAD Dec 16 02:06:03.229000 audit[3755]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3743 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646664373736373433373963396338646630343439616661353530 Dec 16 02:06:03.230000 audit: BPF prog-id=166 op=LOAD Dec 16 02:06:03.230000 audit[3755]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3743 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136646664373736373433373963396338646630343439616661353530 Dec 16 02:06:03.244899 containerd[2088]: time="2025-12-16T02:06:03.244821676Z" level=info msg="CreateContainer within sandbox \"50146dcb0e691463262085a8b7644270ba6400363d5233d4f5d9a15f1d0393e8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abca325fa7753f85f9b63a38b9906a9b141cef35ab334061f26644a43fa0d2e3\"" Dec 16 02:06:03.246348 containerd[2088]: time="2025-12-16T02:06:03.245985666Z" level=info msg="StartContainer for \"abca325fa7753f85f9b63a38b9906a9b141cef35ab334061f26644a43fa0d2e3\"" Dec 16 02:06:03.249796 containerd[2088]: time="2025-12-16T02:06:03.249743724Z" level=info msg="connecting to shim abca325fa7753f85f9b63a38b9906a9b141cef35ab334061f26644a43fa0d2e3" address="unix:///run/containerd/s/c7ac86865bf4ee201cafac0ea6d23146ae4bc9391f69df57207237b4544f4580" protocol=ttrpc version=3 Dec 16 02:06:03.262870 containerd[2088]: time="2025-12-16T02:06:03.262724496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zb7cs,Uid:31b7ae7f-ca01-444c-a85a-33f70cdc5716,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a6dfd77674379c9c8df0449afa55061287a2bb5b5763dfbdc0de87b5a158f75b\"" Dec 16 02:06:03.264490 containerd[2088]: time="2025-12-16T02:06:03.264458536Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 02:06:03.267962 systemd[1]: Started cri-containerd-abca325fa7753f85f9b63a38b9906a9b141cef35ab334061f26644a43fa0d2e3.scope - libcontainer container abca325fa7753f85f9b63a38b9906a9b141cef35ab334061f26644a43fa0d2e3. 
Dec 16 02:06:03.303000 audit: BPF prog-id=167 op=LOAD Dec 16 02:06:03.303000 audit[3776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3699 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162636133323566613737353366383566396236336133386239393036 Dec 16 02:06:03.303000 audit: BPF prog-id=168 op=LOAD Dec 16 02:06:03.303000 audit[3776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3699 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162636133323566613737353366383566396236336133386239393036 Dec 16 02:06:03.303000 audit: BPF prog-id=168 op=UNLOAD Dec 16 02:06:03.303000 audit[3776]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3699 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162636133323566613737353366383566396236336133386239393036 Dec 16 02:06:03.303000 audit: BPF prog-id=167 op=UNLOAD Dec 16 02:06:03.303000 audit[3776]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3699 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162636133323566613737353366383566396236336133386239393036 Dec 16 02:06:03.303000 audit: BPF prog-id=169 op=LOAD Dec 16 02:06:03.303000 audit[3776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3699 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162636133323566613737353366383566396236336133386239393036 Dec 16 02:06:03.327633 containerd[2088]: time="2025-12-16T02:06:03.327539085Z" level=info msg="StartContainer for 
\"abca325fa7753f85f9b63a38b9906a9b141cef35ab334061f26644a43fa0d2e3\" returns successfully" Dec 16 02:06:03.405000 audit[3844]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.405000 audit[3844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc14bf8a0 a2=0 a3=1 items=0 ppid=3793 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:06:03.405000 audit[3843]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=3843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.405000 audit[3843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeec6d7a0 a2=0 a3=1 items=0 ppid=3793 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:06:03.409000 audit[3847]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.409000 audit[3847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2c3d970 a2=0 a3=1 items=0 ppid=3793 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:06:03.412000 audit[3846]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=3846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.412000 audit[3846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9bc41d0 a2=0 a3=1 items=0 ppid=3793 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:06:03.413000 audit[3850]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.413000 audit[3850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdee7e6e0 a2=0 a3=1 items=0 ppid=3793 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:06:03.415000 audit[3852]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=3852 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.415000 audit[3852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc547f5f0 a2=0 a3=1 items=0 ppid=3793 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:06:03.509000 audit[3853]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.509000 audit[3853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe79e74f0 a2=0 a3=1 items=0 ppid=3793 pid=3853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:06:03.511000 audit[3855]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.511000 audit[3855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd9bfcfa0 a2=0 a3=1 items=0 ppid=3793 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 02:06:03.515000 audit[3858]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.515000 audit[3858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe7236000 a2=0 a3=1 items=0 ppid=3793 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 02:06:03.516000 audit[3859]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.516000 audit[3859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe11fc380 a2=0 a3=1 items=0 ppid=3793 pid=3859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:06:03.518000 
audit[3861]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.518000 audit[3861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff73aac40 a2=0 a3=1 items=0 ppid=3793 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:06:03.519000 audit[3862]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.519000 audit[3862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffaf56820 a2=0 a3=1 items=0 ppid=3793 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.519000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:06:03.521000 audit[3864]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.521000 audit[3864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff5d5e230 a2=0 a3=1 items=0 ppid=3793 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 02:06:03.524000 audit[3867]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.524000 audit[3867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd4afe600 a2=0 a3=1 items=0 ppid=3793 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.524000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 02:06:03.525000 audit[3868]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.525000 audit[3868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc6abf00 a2=0 a3=1 items=0 ppid=3793 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:06:03.527000 audit[3870]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.527000 audit[3870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc3b5a00 a2=0 a3=1 items=0 ppid=3793 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:06:03.528000 audit[3871]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.528000 audit[3871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe53d1d80 a2=0 a3=1 items=0 ppid=3793 pid=3871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:06:03.530000 audit[3873]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.530000 audit[3873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd0051e20 a2=0 a3=1 items=0 ppid=3793 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.530000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:06:03.533000 audit[3876]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.533000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeab55790 a2=0 a3=1 items=0 ppid=3793 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.533000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:06:03.536000 audit[3879]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.536000 audit[3879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6790d20 
a2=0 a3=1 items=0 ppid=3793 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.536000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 02:06:03.538000 audit[3880]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.538000 audit[3880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeda34cb0 a2=0 a3=1 items=0 ppid=3793 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.538000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:06:03.540000 audit[3882]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.540000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffffc2e4fc0 a2=0 a3=1 items=0 ppid=3793 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.540000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:06:03.543000 audit[3885]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.543000 audit[3885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1ffe280 a2=0 a3=1 items=0 ppid=3793 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.543000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:06:03.544000 audit[3886]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:06:03.544000 audit[3886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4038a00 a2=0 a3=1 items=0 ppid=3793 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:06:03.547000 audit[3888]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Dec 16 02:06:03.547000 audit[3888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffdb370f90 a2=0 a3=1 items=0 ppid=3793 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:06:03.624000 audit[3894]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:03.624000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff7ce8960 a2=0 a3=1 items=0 ppid=3793 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:03.655000 audit[3894]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:03.655000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff7ce8960 a2=0 a3=1 items=0 ppid=3793 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:03.656000 audit[3899]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.656000 audit[3899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffbb0f720 a2=0 a3=1 items=0 ppid=3793 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.656000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:06:03.659000 audit[3901]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.659000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffed0d5470 a2=0 a3=1 items=0 ppid=3793 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 02:06:03.663000 audit[3904]: NETFILTER_CFG table=filter:86 
family=10 entries=1 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.663000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe6ef6340 a2=0 a3=1 items=0 ppid=3793 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 02:06:03.664000 audit[3905]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.664000 audit[3905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4094440 a2=0 a3=1 items=0 ppid=3793 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.664000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:06:03.666000 audit[3907]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.666000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe56387a0 a2=0 a3=1 items=0 ppid=3793 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:06:03.667000 audit[3908]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.667000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4c55f30 a2=0 a3=1 items=0 ppid=3793 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:06:03.669000 audit[3910]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.669000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdcb86b80 a2=0 a3=1 items=0 ppid=3793 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.669000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 02:06:03.672000 audit[3913]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.672000 audit[3913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffec8757b0 a2=0 a3=1 items=0 ppid=3793 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 02:06:03.673000 audit[3914]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.673000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1b22500 a2=0 a3=1 items=0 ppid=3793 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.673000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:06:03.675000 audit[3916]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.675000 audit[3916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffec242df0 a2=0 a3=1 items=0 ppid=3793 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.675000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:06:03.676000 audit[3917]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.676000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc4e63c0 a2=0 a3=1 items=0 ppid=3793 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:06:03.679000 audit[3919]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.679000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec746b60 a2=0 a3=1 items=0 ppid=3793 pid=3919 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.679000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:06:03.682000 audit[3922]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.682000 audit[3922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffcf08370 a2=0 a3=1 items=0 ppid=3793 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 02:06:03.685000 audit[3925]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.685000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4fdf620 a2=0 a3=1 items=0 ppid=3793 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 02:06:03.686000 audit[3926]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.686000 audit[3926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcec1cc10 a2=0 a3=1 items=0 ppid=3793 pid=3926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:06:03.688000 audit[3928]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.688000 audit[3928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe2938660 a2=0 a3=1 items=0 ppid=3793 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 
02:06:03.691000 audit[3931]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.691000 audit[3931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcc0f6c30 a2=0 a3=1 items=0 ppid=3793 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:06:03.693000 audit[3932]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.693000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9f099c0 a2=0 a3=1 items=0 ppid=3793 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.693000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:06:03.695000 audit[3934]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.695000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffed7d5e10 a2=0 a3=1 items=0 ppid=3793 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:06:03.696000 audit[3935]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.696000 audit[3935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1812890 a2=0 a3=1 items=0 ppid=3793 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:06:03.698000 audit[3937]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.698000 audit[3937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffedc40ba0 a2=0 a3=1 items=0 ppid=3793 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.698000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:06:03.702000 audit[3940]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:06:03.702000 audit[3940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc2ab4da0 a2=0 a3=1 items=0 ppid=3793 pid=3940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:06:03.705000 audit[3942]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:06:03.705000 audit[3942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff6937030 a2=0 a3=1 items=0 ppid=3793 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.705000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:03.706000 audit[3942]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:06:03.706000 audit[3942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff6937030 a2=0 a3=1 items=0 ppid=3793 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:03.706000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:04.372906 kubelet[3639]: I1216 02:06:04.372588 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lrmdw" podStartSLOduration=2.372403482 podStartE2EDuration="2.372403482s" podCreationTimestamp="2025-12-16 02:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:06:04.330154751 +0000 UTC m=+7.227808527" watchObservedRunningTime="2025-12-16 02:06:04.372403482 +0000 UTC m=+7.270057258" Dec 16 02:06:06.112770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951251514.mount: Deactivated successfully. 
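The audit PROCTITLE fields in the records above are the process argv encoded as hex, with NUL bytes separating the arguments, so the exact iptables/ip6tables invocations issued by kube-proxy can be recovered offline. A minimal Python sketch, using the KUBE-PROXY-CANARY record above as sample input (the helper name is mine):

```python
# Decode an audit PROCTITLE value: the field is the process argv encoded as
# hex, with NUL bytes separating the individual arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", errors="replace")
                    for arg in raw.split(b"\x00") if arg)

# Copied from the KUBE-PROXY-CANARY record above; prints
# "ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t filter".
sample = "6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572"
print(decode_proctitle(sample))
```

The same decoder applies to the iptables-restore and runc PROCTITLE values later in the log; records the kernel truncated at the 128-byte proctitle limit will simply decode to a cut-off command line.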
Dec 16 02:06:08.539599 containerd[2088]: time="2025-12-16T02:06:08.539544403Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:08.543824 containerd[2088]: time="2025-12-16T02:06:08.543764689Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22143261" Dec 16 02:06:08.547364 containerd[2088]: time="2025-12-16T02:06:08.547334649Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:08.552034 containerd[2088]: time="2025-12-16T02:06:08.552002750Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:08.552900 containerd[2088]: time="2025-12-16T02:06:08.552872675Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 5.288381754s" Dec 16 02:06:08.552924 containerd[2088]: time="2025-12-16T02:06:08.552903900Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 02:06:08.561902 containerd[2088]: time="2025-12-16T02:06:08.561870217Z" level=info msg="CreateContainer within sandbox \"a6dfd77674379c9c8df0449afa55061287a2bb5b5763dfbdc0de87b5a158f75b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 02:06:08.588142 containerd[2088]: time="2025-12-16T02:06:08.587751575Z" level=info msg="Container bc29774888e6a5e3fd2b1fd64b78e43bc858a5a84a88f029db67e9cc9a168036: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:06:08.602815 containerd[2088]: time="2025-12-16T02:06:08.602708893Z" level=info msg="CreateContainer within sandbox \"a6dfd77674379c9c8df0449afa55061287a2bb5b5763dfbdc0de87b5a158f75b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bc29774888e6a5e3fd2b1fd64b78e43bc858a5a84a88f029db67e9cc9a168036\"" Dec 16 02:06:08.604676 containerd[2088]: time="2025-12-16T02:06:08.604643510Z" level=info msg="StartContainer for \"bc29774888e6a5e3fd2b1fd64b78e43bc858a5a84a88f029db67e9cc9a168036\"" Dec 16 02:06:08.605527 containerd[2088]: time="2025-12-16T02:06:08.605488795Z" level=info msg="connecting to shim bc29774888e6a5e3fd2b1fd64b78e43bc858a5a84a88f029db67e9cc9a168036" address="unix:///run/containerd/s/1d08f373bfb10f4efe2863cf6a5201e9a362e59a179a88528b1b581318ba5996" protocol=ttrpc version=3 Dec 16 02:06:08.624947 systemd[1]: Started cri-containerd-bc29774888e6a5e3fd2b1fd64b78e43bc858a5a84a88f029db67e9cc9a168036.scope - libcontainer container bc29774888e6a5e3fd2b1fd64b78e43bc858a5a84a88f029db67e9cc9a168036. 
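The containerd records above report both the byte count ("bytes read=22143261") and the wall-clock pull time ("5.288381754s") for quay.io/tigera/operator:v1.38.7, which is enough for a rough average-throughput estimate. A quick sketch, ignoring registry round-trips and decompression overhead:

```python
# Rough average pull throughput for quay.io/tigera/operator:v1.38.7, using the
# byte count and duration reported by containerd in the records above.
bytes_read = 22_143_261      # "bytes read=22143261"
duration_s = 5.288381754     # "in 5.288381754s"

mib_per_s = bytes_read / duration_s / (1024 * 1024)
print(f"~{mib_per_s:.1f} MiB/s average")  # roughly 4 MiB/s
```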
Dec 16 02:06:08.631000 audit: BPF prog-id=170 op=LOAD Dec 16 02:06:08.635897 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 02:06:08.635962 kernel: audit: type=1334 audit(1765850768.631:540): prog-id=170 op=LOAD Dec 16 02:06:08.638000 audit: BPF prog-id=171 op=LOAD Dec 16 02:06:08.644035 kernel: audit: type=1334 audit(1765850768.638:541): prog-id=171 op=LOAD Dec 16 02:06:08.638000 audit[3953]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.659846 kernel: audit: type=1300 audit(1765850768.638:541): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.676438 kernel: audit: type=1327 audit(1765850768.638:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.676549 kernel: audit: type=1334 audit(1765850768.638:542): prog-id=171 op=UNLOAD Dec 16 02:06:08.638000 audit: BPF prog-id=171 op=UNLOAD Dec 16 02:06:08.638000 audit[3953]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.697284 kernel: audit: type=1300 audit(1765850768.638:542): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.714575 kernel: audit: type=1327 audit(1765850768.638:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.715839 kernel: audit: type=1334 audit(1765850768.638:543): prog-id=172 op=LOAD Dec 16 02:06:08.638000 audit: BPF prog-id=172 op=LOAD Dec 16 02:06:08.638000 audit[3953]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.736242 kernel: audit: type=1300 audit(1765850768.638:543): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.752122 kernel: audit: type=1327 audit(1765850768.638:543): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.642000 audit: BPF prog-id=173 op=LOAD Dec 16 02:06:08.642000 audit[3953]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.658000 audit: BPF prog-id=173 op=UNLOAD Dec 16 02:06:08.658000 audit[3953]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.658000 audit: BPF prog-id=172 op=UNLOAD Dec 16 02:06:08.658000 audit[3953]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.658000 audit: BPF prog-id=174 op=LOAD Dec 16 02:06:08.658000 audit[3953]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3743 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:08.658000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323937373438383865366135653366643262316664363462373865 Dec 16 02:06:08.774014 containerd[2088]: time="2025-12-16T02:06:08.773982488Z" level=info msg="StartContainer for \"bc29774888e6a5e3fd2b1fd64b78e43bc858a5a84a88f029db67e9cc9a168036\" returns successfully" Dec 16 02:06:09.796720 kubelet[3639]: I1216 02:06:09.796638 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-zb7cs" podStartSLOduration=2.507144648 podStartE2EDuration="7.796613798s" podCreationTimestamp="2025-12-16 02:06:02 +0000 UTC" firstStartedPulling="2025-12-16 02:06:03.264115533 +0000 UTC m=+6.161769309" lastFinishedPulling="2025-12-16 02:06:08.553584683 +0000 UTC m=+11.451238459" observedRunningTime="2025-12-16 02:06:09.337073719 +0000 UTC m=+12.234727527" watchObservedRunningTime="2025-12-16 02:06:09.796613798 +0000 UTC m=+12.694267582" Dec 16 02:06:13.853128 sudo[2595]: pam_unix(sudo:session): session closed for user root Dec 16 02:06:13.851000 audit[2595]: USER_END pid=2595 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:06:13.857037 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 02:06:13.857077 kernel: audit: type=1106 audit(1765850773.851:548): pid=2595 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:06:13.851000 audit[2595]: CRED_DISP pid=2595 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:06:13.886677 kernel: audit: type=1104 audit(1765850773.851:549): pid=2595 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:06:13.948120 sshd[2594]: Connection closed by 10.200.16.10 port 56198 Dec 16 02:06:13.949678 sshd-session[2590]: pam_unix(sshd:session): session closed for user core Dec 16 02:06:13.949000 audit[2590]: USER_END pid=2590 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:06:13.973781 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:56198.service: Deactivated successfully. Dec 16 02:06:13.977596 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 02:06:13.978280 systemd[1]: session-10.scope: Consumed 4.550s CPU time, 223.3M memory peak. Dec 16 02:06:13.981043 systemd-logind[2060]: Session 10 logged out. Waiting for processes to exit. Dec 16 02:06:13.982819 systemd-logind[2060]: Removed session 10. 
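The pod_startup_latency_tracker record above for tigera-operator is internally consistent: podStartSLOduration equals the end-to-end duration (observedRunningTime minus podCreationTimestamp) with the image-pulling window (lastFinishedPulling minus firstStartedPulling) subtracted. A small check using the timestamps from that record, truncated to microseconds for strptime:

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # Journal timestamps carry nanoseconds; strptime's %f accepts at most six digits.
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = ts("2025-12-16 02:06:02.000000")     # podCreationTimestamp (logged without a fraction)
running    = ts("2025-12-16 02:06:09.796613798")  # observedRunningTime
pull_start = ts("2025-12-16 02:06:03.264115533")  # firstStartedPulling
pull_end   = ts("2025-12-16 02:06:08.553584683")  # lastFinishedPulling

e2e = (running - created).total_seconds()
slo = e2e - (pull_end - pull_start).total_seconds()
print(f"E2E ~ {e2e:.6f}s, SLO ~ {slo:.6f}s")
# ~7.7966s and ~2.5071s, matching podStartE2EDuration and podStartSLOduration above
# to microsecond precision.
```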
Dec 16 02:06:13.949000 audit[2590]: CRED_DISP pid=2590 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:06:14.000753 kernel: audit: type=1106 audit(1765850773.949:550): pid=2590 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:06:14.000821 kernel: audit: type=1104 audit(1765850773.949:551): pid=2590 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:06:13.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:56198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:06:14.016803 kernel: audit: type=1131 audit(1765850773.973:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:56198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:06:15.056000 audit[4029]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:15.056000 audit[4029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd2801ac0 a2=0 a3=1 items=0 ppid=3793 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:15.086242 kernel: audit: type=1325 audit(1765850775.056:553): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:15.086341 kernel: audit: type=1300 audit(1765850775.056:553): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd2801ac0 a2=0 a3=1 items=0 ppid=3793 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:15.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:15.099477 kernel: audit: type=1327 audit(1765850775.056:553): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:15.100000 audit[4029]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:15.111370 kernel: audit: type=1325 audit(1765850775.100:554): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:15.100000 audit[4029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd2801ac0 a2=0 a3=1 items=0 ppid=3793 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:15.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:15.133828 kernel: audit: type=1300 audit(1765850775.100:554): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd2801ac0 a2=0 a3=1 items=0 ppid=3793 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:15.170000 audit[4031]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:15.170000 audit[4031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffef029e70 a2=0 a3=1 items=0 ppid=3793 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:15.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:15.175000 audit[4031]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:15.175000 audit[4031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffef029e70 a2=0 a3=1 items=0 ppid=3793 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:15.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:19.221000 audit[4033]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:19.229279 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 02:06:19.229362 kernel: audit: type=1325 audit(1765850779.221:557): table=filter:112 family=2 entries=17 op=nft_register_rule pid=4033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:19.221000 audit[4033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5cb9320 a2=0 a3=1 items=0 ppid=3793 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:19.262811 kernel: audit: type=1300 audit(1765850779.221:557): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff5cb9320 a2=0 a3=1 items=0 ppid=3793 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:19.262894 kernel: audit: type=1327 audit(1765850779.221:557): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:19.221000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:19.280000 audit[4033]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:19.280000 audit[4033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5cb9320 a2=0 a3=1 items=0 ppid=3793 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:19.312569 kernel: audit: type=1325 audit(1765850779.280:558): table=nat:113 family=2 entries=12 op=nft_register_rule pid=4033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:19.312759 kernel: audit: type=1300 audit(1765850779.280:558): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff5cb9320 a2=0 a3=1 items=0 ppid=3793 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:19.280000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:19.327301 kernel: audit: type=1327 audit(1765850779.280:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:19.332000 audit[4035]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:19.345335 kernel: audit: type=1325 audit(1765850779.332:559): table=filter:114 family=2 entries=18 op=nft_register_rule pid=4035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:19.345414 kernel: audit: type=1300 audit(1765850779.332:559): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff6af67b0 a2=0 a3=1 items=0 ppid=3793 pid=4035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:19.332000 audit[4035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff6af67b0 a2=0 a3=1 items=0 ppid=3793 pid=4035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:19.332000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:19.373470 kernel: audit: type=1327 audit(1765850779.332:559): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:19.373000 audit[4035]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:19.383936 kernel: audit: type=1325 audit(1765850779.373:560): table=nat:115 family=2 entries=12 op=nft_register_rule pid=4035 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:19.373000 audit[4035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff6af67b0 a2=0 a3=1 items=0 ppid=3793 pid=4035 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:19.373000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:20.390000 audit[4037]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:20.390000 audit[4037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd4b60710 a2=0 a3=1 items=0 ppid=3793 pid=4037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:20.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:20.395000 audit[4037]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:20.395000 audit[4037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd4b60710 a2=0 a3=1 items=0 ppid=3793 pid=4037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:20.395000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:20.745091 systemd[1]: Created slice kubepods-besteffort-pod0c1b64d4_03a7_4a6f_92a2_f980f7ab66d8.slice - libcontainer container kubepods-besteffort-pod0c1b64d4_03a7_4a6f_92a2_f980f7ab66d8.slice. Dec 16 02:06:20.901985 systemd[1]: Created slice kubepods-besteffort-pod361e269b_93c6_4201_9ecb_58dd9985eceb.slice - libcontainer container kubepods-besteffort-pod361e269b_93c6_4201_9ecb_58dd9985eceb.slice. 
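The kubepods slice names in the systemd messages above follow the naming visible throughout this log for the systemd cgroup driver: a QoS-class prefix plus the pod UID with its dashes replaced by underscores (the dashed UIDs appear in the kubelet volume records that follow). A small sketch reproducing that naming; the helper is illustrative, not a kubelet API:

```python
# Reconstruct the slice name systemd logs for a BestEffort pod, as seen above:
# kubepods-besteffort-pod<UID with '-' replaced by '_'>.slice
def besteffort_pod_slice(pod_uid: str) -> str:
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

# UID of the calico-typha pod referenced in the kubelet volume messages below.
print(besteffort_pod_slice("0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8"))
# -> kubepods-besteffort-pod0c1b64d4_03a7_4a6f_92a2_f980f7ab66d8.slice
```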
Dec 16 02:06:20.925318 kubelet[3639]: I1216 02:06:20.925278 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8-tigera-ca-bundle\") pod \"calico-typha-7685ffc944-2dthq\" (UID: \"0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8\") " pod="calico-system/calico-typha-7685ffc944-2dthq" Dec 16 02:06:20.925318 kubelet[3639]: I1216 02:06:20.925315 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8-typha-certs\") pod \"calico-typha-7685ffc944-2dthq\" (UID: \"0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8\") " pod="calico-system/calico-typha-7685ffc944-2dthq" Dec 16 02:06:20.925318 kubelet[3639]: I1216 02:06:20.925329 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pfhb\" (UniqueName: \"kubernetes.io/projected/0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8-kube-api-access-2pfhb\") pod \"calico-typha-7685ffc944-2dthq\" (UID: \"0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8\") " pod="calico-system/calico-typha-7685ffc944-2dthq" Dec 16 02:06:21.026247 kubelet[3639]: I1216 02:06:21.026211 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-flexvol-driver-host\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026247 kubelet[3639]: I1216 02:06:21.026245 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nkwd\" (UniqueName: \"kubernetes.io/projected/361e269b-93c6-4201-9ecb-58dd9985eceb-kube-api-access-4nkwd\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026415 kubelet[3639]: I1216 02:06:21.026284 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-policysync\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026415 kubelet[3639]: I1216 02:06:21.026304 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-var-run-calico\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026415 kubelet[3639]: I1216 02:06:21.026315 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-xtables-lock\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026415 kubelet[3639]: I1216 02:06:21.026332 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-cni-log-dir\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " 
pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026415 kubelet[3639]: I1216 02:06:21.026341 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-lib-modules\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026492 kubelet[3639]: I1216 02:06:21.026350 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/361e269b-93c6-4201-9ecb-58dd9985eceb-node-certs\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026492 kubelet[3639]: I1216 02:06:21.026361 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-var-lib-calico\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026492 kubelet[3639]: I1216 02:06:21.026371 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-cni-net-dir\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026492 kubelet[3639]: I1216 02:06:21.026379 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/361e269b-93c6-4201-9ecb-58dd9985eceb-tigera-ca-bundle\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.026492 kubelet[3639]: I1216 02:06:21.026391 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/361e269b-93c6-4201-9ecb-58dd9985eceb-cni-bin-dir\") pod \"calico-node-d4x59\" (UID: \"361e269b-93c6-4201-9ecb-58dd9985eceb\") " pod="calico-system/calico-node-d4x59" Dec 16 02:06:21.049125 containerd[2088]: time="2025-12-16T02:06:21.048916486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7685ffc944-2dthq,Uid:0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:21.096276 kubelet[3639]: E1216 02:06:21.096220 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:06:21.117503 containerd[2088]: time="2025-12-16T02:06:21.117152981Z" level=info msg="connecting to shim 9203667696efab23754ebf0a16cb1c78852a20c15ed27ca3b92f71151bd3268f" address="unix:///run/containerd/s/38cef0a5d56f3e071077cf6f5a4d531b183b0fbb1d95a0d1339647491dcfb238" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:06:21.131307 kubelet[3639]: E1216 02:06:21.131280 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.131307 kubelet[3639]: W1216 02:06:21.131300 3639 driver-call.go:149] 
FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.131437 kubelet[3639]: E1216 02:06:21.131405 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.134877 kubelet[3639]: E1216 02:06:21.134807 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.134877 kubelet[3639]: W1216 02:06:21.134823 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.134877 kubelet[3639]: E1216 02:06:21.134835 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.138739 kubelet[3639]: E1216 02:06:21.138300 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.138739 kubelet[3639]: W1216 02:06:21.138316 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.140562 kubelet[3639]: E1216 02:06:21.138822 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.152285 systemd[1]: Started cri-containerd-9203667696efab23754ebf0a16cb1c78852a20c15ed27ca3b92f71151bd3268f.scope - libcontainer container 9203667696efab23754ebf0a16cb1c78852a20c15ed27ca3b92f71151bd3268f. Dec 16 02:06:21.158320 kubelet[3639]: E1216 02:06:21.158295 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.158320 kubelet[3639]: W1216 02:06:21.158313 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.158428 kubelet[3639]: E1216 02:06:21.158336 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.172000 audit: BPF prog-id=175 op=LOAD Dec 16 02:06:21.172000 audit: BPF prog-id=176 op=LOAD Dec 16 02:06:21.172000 audit[4060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4049 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303336363736393665666162323337353465626630613136636231 Dec 16 02:06:21.172000 audit: BPF prog-id=176 op=UNLOAD Dec 16 02:06:21.172000 audit[4060]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4049 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303336363736393665666162323337353465626630613136636231 Dec 16 02:06:21.173000 audit: BPF prog-id=177 op=LOAD Dec 16 02:06:21.173000 audit[4060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4049 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303336363736393665666162323337353465626630613136636231 Dec 16 02:06:21.173000 audit: BPF prog-id=178 op=LOAD Dec 16 02:06:21.173000 audit[4060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4049 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303336363736393665666162323337353465626630613136636231 Dec 16 02:06:21.173000 audit: BPF prog-id=178 op=UNLOAD Dec 16 02:06:21.173000 audit[4060]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4049 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303336363736393665666162323337353465626630613136636231 Dec 16 02:06:21.173000 audit: BPF prog-id=177 op=UNLOAD Dec 
16 02:06:21.173000 audit[4060]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4049 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303336363736393665666162323337353465626630613136636231 Dec 16 02:06:21.173000 audit: BPF prog-id=179 op=LOAD Dec 16 02:06:21.173000 audit[4060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4049 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932303336363736393665666162323337353465626630613136636231 Dec 16 02:06:21.204359 containerd[2088]: time="2025-12-16T02:06:21.204323221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7685ffc944-2dthq,Uid:0c1b64d4-03a7-4a6f-92a2-f980f7ab66d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9203667696efab23754ebf0a16cb1c78852a20c15ed27ca3b92f71151bd3268f\"" Dec 16 02:06:21.206356 containerd[2088]: time="2025-12-16T02:06:21.206130576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d4x59,Uid:361e269b-93c6-4201-9ecb-58dd9985eceb,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:21.208551 containerd[2088]: time="2025-12-16T02:06:21.208519766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 02:06:21.229218 kubelet[3639]: E1216 02:06:21.229112 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.229619 kubelet[3639]: W1216 02:06:21.229386 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.229619 kubelet[3639]: E1216 02:06:21.229411 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.229619 kubelet[3639]: I1216 02:06:21.229438 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8d6bb703-5160-48e3-8477-a1bbde860409-socket-dir\") pod \"csi-node-driver-rmzsh\" (UID: \"8d6bb703-5160-48e3-8477-a1bbde860409\") " pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:21.230501 kubelet[3639]: E1216 02:06:21.230229 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.230501 kubelet[3639]: W1216 02:06:21.230243 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.230501 kubelet[3639]: E1216 02:06:21.230273 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.230501 kubelet[3639]: I1216 02:06:21.230292 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw58z\" (UniqueName: \"kubernetes.io/projected/8d6bb703-5160-48e3-8477-a1bbde860409-kube-api-access-fw58z\") pod \"csi-node-driver-rmzsh\" (UID: \"8d6bb703-5160-48e3-8477-a1bbde860409\") " pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:21.230501 kubelet[3639]: E1216 02:06:21.230446 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.230501 kubelet[3639]: W1216 02:06:21.230457 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.230501 kubelet[3639]: E1216 02:06:21.230475 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.230652 kubelet[3639]: E1216 02:06:21.230597 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.230652 kubelet[3639]: W1216 02:06:21.230604 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.230652 kubelet[3639]: E1216 02:06:21.230611 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.231194 kubelet[3639]: E1216 02:06:21.230724 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.231194 kubelet[3639]: W1216 02:06:21.230733 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.231194 kubelet[3639]: E1216 02:06:21.230739 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.231194 kubelet[3639]: I1216 02:06:21.230752 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d6bb703-5160-48e3-8477-a1bbde860409-kubelet-dir\") pod \"csi-node-driver-rmzsh\" (UID: \"8d6bb703-5160-48e3-8477-a1bbde860409\") " pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:21.231343 kubelet[3639]: E1216 02:06:21.231330 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.231343 kubelet[3639]: W1216 02:06:21.231339 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.231389 kubelet[3639]: E1216 02:06:21.231347 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.231389 kubelet[3639]: I1216 02:06:21.231368 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8d6bb703-5160-48e3-8477-a1bbde860409-varrun\") pod \"csi-node-driver-rmzsh\" (UID: \"8d6bb703-5160-48e3-8477-a1bbde860409\") " pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:21.231517 kubelet[3639]: E1216 02:06:21.231499 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.231517 kubelet[3639]: W1216 02:06:21.231509 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.231517 kubelet[3639]: E1216 02:06:21.231516 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.231517 kubelet[3639]: I1216 02:06:21.231546 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8d6bb703-5160-48e3-8477-a1bbde860409-registration-dir\") pod \"csi-node-driver-rmzsh\" (UID: \"8d6bb703-5160-48e3-8477-a1bbde860409\") " pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:21.231923 kubelet[3639]: E1216 02:06:21.231628 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.231923 kubelet[3639]: W1216 02:06:21.231634 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.231923 kubelet[3639]: E1216 02:06:21.231639 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.231923 kubelet[3639]: E1216 02:06:21.231735 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.231923 kubelet[3639]: W1216 02:06:21.231740 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.231923 kubelet[3639]: E1216 02:06:21.231754 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.231923 kubelet[3639]: E1216 02:06:21.231862 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.231923 kubelet[3639]: W1216 02:06:21.231867 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.231923 kubelet[3639]: E1216 02:06:21.231880 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.232146 kubelet[3639]: E1216 02:06:21.231970 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.232146 kubelet[3639]: W1216 02:06:21.231974 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.232146 kubelet[3639]: E1216 02:06:21.231979 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.232146 kubelet[3639]: E1216 02:06:21.232064 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.232146 kubelet[3639]: W1216 02:06:21.232069 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.232146 kubelet[3639]: E1216 02:06:21.232074 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.232229 kubelet[3639]: E1216 02:06:21.232162 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.232229 kubelet[3639]: W1216 02:06:21.232173 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.232229 kubelet[3639]: E1216 02:06:21.232178 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.232268 kubelet[3639]: E1216 02:06:21.232249 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.232268 kubelet[3639]: W1216 02:06:21.232253 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.232268 kubelet[3639]: E1216 02:06:21.232257 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.232645 kubelet[3639]: E1216 02:06:21.232340 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.232645 kubelet[3639]: W1216 02:06:21.232348 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.232645 kubelet[3639]: E1216 02:06:21.232352 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.289926 containerd[2088]: time="2025-12-16T02:06:21.289820791Z" level=info msg="connecting to shim 582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336" address="unix:///run/containerd/s/6f64e85e619e0dbe5c2b798be72f51e0d62683a54ba25ebe7d48a1f7e7b375b3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:06:21.309003 systemd[1]: Started cri-containerd-582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336.scope - libcontainer container 582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336. 
Dec 16 02:06:21.315000 audit: BPF prog-id=180 op=LOAD Dec 16 02:06:21.316000 audit: BPF prog-id=181 op=LOAD Dec 16 02:06:21.316000 audit[4130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4119 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323139386338356234333831336561383533393164643366303461 Dec 16 02:06:21.316000 audit: BPF prog-id=181 op=UNLOAD Dec 16 02:06:21.316000 audit[4130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323139386338356234333831336561383533393164643366303461 Dec 16 02:06:21.316000 audit: BPF prog-id=182 op=LOAD Dec 16 02:06:21.316000 audit[4130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4119 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323139386338356234333831336561383533393164643366303461 Dec 16 02:06:21.316000 audit: BPF prog-id=183 op=LOAD Dec 16 02:06:21.316000 audit[4130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4119 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323139386338356234333831336561383533393164643366303461 Dec 16 02:06:21.316000 audit: BPF prog-id=183 op=UNLOAD Dec 16 02:06:21.316000 audit[4130]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323139386338356234333831336561383533393164643366303461 Dec 16 02:06:21.316000 audit: BPF prog-id=182 op=UNLOAD Dec 16 02:06:21.316000 audit[4130]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323139386338356234333831336561383533393164643366303461 Dec 16 02:06:21.316000 audit: BPF prog-id=184 op=LOAD Dec 16 02:06:21.316000 audit[4130]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4119 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323139386338356234333831336561383533393164643366303461 Dec 16 02:06:21.332794 kubelet[3639]: E1216 02:06:21.332711 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.332794 kubelet[3639]: W1216 02:06:21.332731 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.332794 kubelet[3639]: E1216 02:06:21.332751 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.333133 kubelet[3639]: E1216 02:06:21.333111 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.333133 kubelet[3639]: W1216 02:06:21.333128 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.333204 kubelet[3639]: E1216 02:06:21.333139 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.333334 kubelet[3639]: E1216 02:06:21.333320 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.333334 kubelet[3639]: W1216 02:06:21.333328 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.333334 kubelet[3639]: E1216 02:06:21.333335 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.333514 kubelet[3639]: E1216 02:06:21.333499 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.333514 kubelet[3639]: W1216 02:06:21.333508 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.333514 kubelet[3639]: E1216 02:06:21.333514 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.333741 kubelet[3639]: E1216 02:06:21.333726 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.333741 kubelet[3639]: W1216 02:06:21.333739 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.333831 kubelet[3639]: E1216 02:06:21.333746 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.334290 kubelet[3639]: E1216 02:06:21.334160 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.334290 kubelet[3639]: W1216 02:06:21.334176 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.334290 kubelet[3639]: E1216 02:06:21.334185 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.334476 kubelet[3639]: E1216 02:06:21.334463 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.334476 kubelet[3639]: W1216 02:06:21.334473 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.334625 kubelet[3639]: E1216 02:06:21.334482 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.334854 kubelet[3639]: E1216 02:06:21.334844 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.334854 kubelet[3639]: W1216 02:06:21.334853 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.334995 kubelet[3639]: E1216 02:06:21.334860 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.334995 kubelet[3639]: E1216 02:06:21.334987 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.335162 kubelet[3639]: W1216 02:06:21.334998 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.335162 kubelet[3639]: E1216 02:06:21.335004 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.335208 containerd[2088]: time="2025-12-16T02:06:21.334777256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d4x59,Uid:361e269b-93c6-4201-9ecb-58dd9985eceb,Namespace:calico-system,Attempt:0,} returns sandbox id \"582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336\"" Dec 16 02:06:21.335331 kubelet[3639]: E1216 02:06:21.335273 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.335331 kubelet[3639]: W1216 02:06:21.335280 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.335331 kubelet[3639]: E1216 02:06:21.335286 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.335462 kubelet[3639]: E1216 02:06:21.335447 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.335462 kubelet[3639]: W1216 02:06:21.335455 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.335462 kubelet[3639]: E1216 02:06:21.335461 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.335685 kubelet[3639]: E1216 02:06:21.335668 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.335685 kubelet[3639]: W1216 02:06:21.335680 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.335732 kubelet[3639]: E1216 02:06:21.335687 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.335976 kubelet[3639]: E1216 02:06:21.335961 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.335976 kubelet[3639]: W1216 02:06:21.335972 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.336066 kubelet[3639]: E1216 02:06:21.335981 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.336645 kubelet[3639]: E1216 02:06:21.336506 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.336645 kubelet[3639]: W1216 02:06:21.336517 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.336645 kubelet[3639]: E1216 02:06:21.336526 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.337287 kubelet[3639]: E1216 02:06:21.337268 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.337287 kubelet[3639]: W1216 02:06:21.337286 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.337406 kubelet[3639]: E1216 02:06:21.337298 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.337882 kubelet[3639]: E1216 02:06:21.337861 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.337882 kubelet[3639]: W1216 02:06:21.337878 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.337945 kubelet[3639]: E1216 02:06:21.337888 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.338574 kubelet[3639]: E1216 02:06:21.338553 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.338574 kubelet[3639]: W1216 02:06:21.338571 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.338637 kubelet[3639]: E1216 02:06:21.338581 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.339071 kubelet[3639]: E1216 02:06:21.339053 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.339071 kubelet[3639]: W1216 02:06:21.339067 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.339238 kubelet[3639]: E1216 02:06:21.339079 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.339331 kubelet[3639]: E1216 02:06:21.339318 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.339331 kubelet[3639]: W1216 02:06:21.339328 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.339465 kubelet[3639]: E1216 02:06:21.339335 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.339546 kubelet[3639]: E1216 02:06:21.339527 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.339587 kubelet[3639]: W1216 02:06:21.339569 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.339587 kubelet[3639]: E1216 02:06:21.339583 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.339759 kubelet[3639]: E1216 02:06:21.339747 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.339759 kubelet[3639]: W1216 02:06:21.339756 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.339913 kubelet[3639]: E1216 02:06:21.339763 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.339992 kubelet[3639]: E1216 02:06:21.339980 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.339992 kubelet[3639]: W1216 02:06:21.339990 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.340041 kubelet[3639]: E1216 02:06:21.339997 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.340159 kubelet[3639]: E1216 02:06:21.340147 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.340159 kubelet[3639]: W1216 02:06:21.340156 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.340194 kubelet[3639]: E1216 02:06:21.340162 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.340311 kubelet[3639]: E1216 02:06:21.340298 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.340311 kubelet[3639]: W1216 02:06:21.340307 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.340396 kubelet[3639]: E1216 02:06:21.340313 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.340585 kubelet[3639]: E1216 02:06:21.340573 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.340709 kubelet[3639]: W1216 02:06:21.340659 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.340709 kubelet[3639]: E1216 02:06:21.340675 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:21.351287 kubelet[3639]: E1216 02:06:21.351266 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:21.351287 kubelet[3639]: W1216 02:06:21.351279 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:21.351287 kubelet[3639]: E1216 02:06:21.351297 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:21.404000 audit[4185]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:21.404000 audit[4185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffe381100 a2=0 a3=1 items=0 ppid=3793 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:21.407000 audit[4185]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:21.407000 audit[4185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe381100 a2=0 a3=1 items=0 ppid=3793 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:21.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:22.532519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272123414.mount: Deactivated successfully. Dec 16 02:06:23.184377 containerd[2088]: time="2025-12-16T02:06:23.184331263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:23.191478 containerd[2088]: time="2025-12-16T02:06:23.191433959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 02:06:23.195618 containerd[2088]: time="2025-12-16T02:06:23.195593806Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:23.200959 containerd[2088]: time="2025-12-16T02:06:23.200917388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:23.201515 containerd[2088]: time="2025-12-16T02:06:23.201214205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.992588196s" Dec 16 02:06:23.201515 containerd[2088]: time="2025-12-16T02:06:23.201240262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 02:06:23.203687 containerd[2088]: time="2025-12-16T02:06:23.203631724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 02:06:23.218967 containerd[2088]: time="2025-12-16T02:06:23.218941831Z" level=info msg="CreateContainer within sandbox \"9203667696efab23754ebf0a16cb1c78852a20c15ed27ca3b92f71151bd3268f\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 02:06:23.256767 containerd[2088]: time="2025-12-16T02:06:23.256707093Z" level=info msg="Container 6bf946cbd3459850d745ea6f1d802d31ee15d9d718795733f0ff724642cb159a: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:06:23.258474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1893815100.mount: Deactivated successfully. Dec 16 02:06:23.283756 containerd[2088]: time="2025-12-16T02:06:23.283662012Z" level=info msg="CreateContainer within sandbox \"9203667696efab23754ebf0a16cb1c78852a20c15ed27ca3b92f71151bd3268f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6bf946cbd3459850d745ea6f1d802d31ee15d9d718795733f0ff724642cb159a\"" Dec 16 02:06:23.284165 containerd[2088]: time="2025-12-16T02:06:23.284146459Z" level=info msg="StartContainer for \"6bf946cbd3459850d745ea6f1d802d31ee15d9d718795733f0ff724642cb159a\"" Dec 16 02:06:23.285463 containerd[2088]: time="2025-12-16T02:06:23.285428853Z" level=info msg="connecting to shim 6bf946cbd3459850d745ea6f1d802d31ee15d9d718795733f0ff724642cb159a" address="unix:///run/containerd/s/38cef0a5d56f3e071077cf6f5a4d531b183b0fbb1d95a0d1339647491dcfb238" protocol=ttrpc version=3 Dec 16 02:06:23.291649 kubelet[3639]: E1216 02:06:23.291095 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:06:23.304950 systemd[1]: Started cri-containerd-6bf946cbd3459850d745ea6f1d802d31ee15d9d718795733f0ff724642cb159a.scope - libcontainer container 6bf946cbd3459850d745ea6f1d802d31ee15d9d718795733f0ff724642cb159a. 
Dec 16 02:06:23.313000 audit: BPF prog-id=185 op=LOAD Dec 16 02:06:23.313000 audit: BPF prog-id=186 op=LOAD Dec 16 02:06:23.313000 audit[4197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4049 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:23.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663934366362643334353938353064373435656136663164383032 Dec 16 02:06:23.313000 audit: BPF prog-id=186 op=UNLOAD Dec 16 02:06:23.313000 audit[4197]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4049 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:23.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663934366362643334353938353064373435656136663164383032 Dec 16 02:06:23.313000 audit: BPF prog-id=187 op=LOAD Dec 16 02:06:23.313000 audit[4197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4049 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:23.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663934366362643334353938353064373435656136663164383032 Dec 16 02:06:23.314000 audit: BPF prog-id=188 op=LOAD Dec 16 02:06:23.314000 audit[4197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4049 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:23.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663934366362643334353938353064373435656136663164383032 Dec 16 02:06:23.314000 audit: BPF prog-id=188 op=UNLOAD Dec 16 02:06:23.314000 audit[4197]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4049 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:23.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663934366362643334353938353064373435656136663164383032 Dec 16 02:06:23.314000 audit: BPF prog-id=187 op=UNLOAD Dec 16 02:06:23.314000 audit[4197]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4049 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:23.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663934366362643334353938353064373435656136663164383032 Dec 16 02:06:23.314000 audit: BPF prog-id=189 op=LOAD Dec 16 02:06:23.314000 audit[4197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4049 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:23.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662663934366362643334353938353064373435656136663164383032 Dec 16 02:06:23.343030 containerd[2088]: time="2025-12-16T02:06:23.342991385Z" level=info msg="StartContainer for \"6bf946cbd3459850d745ea6f1d802d31ee15d9d718795733f0ff724642cb159a\" returns successfully" Dec 16 02:06:23.444480 kubelet[3639]: E1216 02:06:23.444369 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.444625 kubelet[3639]: W1216 02:06:23.444610 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.444722 kubelet[3639]: E1216 02:06:23.444711 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.445428 kubelet[3639]: E1216 02:06:23.445414 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.445772 kubelet[3639]: W1216 02:06:23.445503 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.445772 kubelet[3639]: E1216 02:06:23.445539 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.446274 kubelet[3639]: E1216 02:06:23.446258 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.446396 kubelet[3639]: W1216 02:06:23.446305 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.446396 kubelet[3639]: E1216 02:06:23.446318 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:23.447055 kubelet[3639]: E1216 02:06:23.446953 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.447055 kubelet[3639]: W1216 02:06:23.446968 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.447055 kubelet[3639]: E1216 02:06:23.446978 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.447227 kubelet[3639]: E1216 02:06:23.447184 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.447227 kubelet[3639]: W1216 02:06:23.447194 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.447227 kubelet[3639]: E1216 02:06:23.447203 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.447822 kubelet[3639]: E1216 02:06:23.447438 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.447822 kubelet[3639]: W1216 02:06:23.447448 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.447822 kubelet[3639]: E1216 02:06:23.447456 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.448181 kubelet[3639]: E1216 02:06:23.448119 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.448181 kubelet[3639]: W1216 02:06:23.448131 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.448181 kubelet[3639]: E1216 02:06:23.448141 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.449987 kubelet[3639]: E1216 02:06:23.449859 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.449987 kubelet[3639]: W1216 02:06:23.449874 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.449987 kubelet[3639]: E1216 02:06:23.449884 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:23.450131 kubelet[3639]: E1216 02:06:23.450121 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.450185 kubelet[3639]: W1216 02:06:23.450177 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.450247 kubelet[3639]: E1216 02:06:23.450228 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.450464 kubelet[3639]: E1216 02:06:23.450415 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.450464 kubelet[3639]: W1216 02:06:23.450425 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.450464 kubelet[3639]: E1216 02:06:23.450435 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.450738 kubelet[3639]: E1216 02:06:23.450688 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.450738 kubelet[3639]: W1216 02:06:23.450698 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.450738 kubelet[3639]: E1216 02:06:23.450707 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.451033 kubelet[3639]: E1216 02:06:23.450975 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.451033 kubelet[3639]: W1216 02:06:23.450984 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.451033 kubelet[3639]: E1216 02:06:23.450993 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.451298 kubelet[3639]: E1216 02:06:23.451247 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.451298 kubelet[3639]: W1216 02:06:23.451257 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.451298 kubelet[3639]: E1216 02:06:23.451266 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:23.451628 kubelet[3639]: E1216 02:06:23.451490 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.451628 kubelet[3639]: W1216 02:06:23.451574 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.451628 kubelet[3639]: E1216 02:06:23.451589 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.452170 kubelet[3639]: E1216 02:06:23.452157 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.452361 kubelet[3639]: W1216 02:06:23.452231 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.452361 kubelet[3639]: E1216 02:06:23.452246 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.452463 kubelet[3639]: E1216 02:06:23.452440 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.452463 kubelet[3639]: W1216 02:06:23.452454 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.452463 kubelet[3639]: E1216 02:06:23.452464 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.453126 kubelet[3639]: E1216 02:06:23.453112 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.453126 kubelet[3639]: W1216 02:06:23.453122 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.453235 kubelet[3639]: E1216 02:06:23.453131 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.453335 kubelet[3639]: E1216 02:06:23.453323 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.453335 kubelet[3639]: W1216 02:06:23.453332 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.453431 kubelet[3639]: E1216 02:06:23.453339 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:23.453606 kubelet[3639]: E1216 02:06:23.453594 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.453606 kubelet[3639]: W1216 02:06:23.453603 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.453872 kubelet[3639]: E1216 02:06:23.453610 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.453872 kubelet[3639]: E1216 02:06:23.453851 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.453872 kubelet[3639]: W1216 02:06:23.453861 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.453935 kubelet[3639]: E1216 02:06:23.453910 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.454127 kubelet[3639]: E1216 02:06:23.454114 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.454127 kubelet[3639]: W1216 02:06:23.454123 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.455126 kubelet[3639]: E1216 02:06:23.454131 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.455126 kubelet[3639]: E1216 02:06:23.454338 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.455126 kubelet[3639]: W1216 02:06:23.454345 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.455126 kubelet[3639]: E1216 02:06:23.454352 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.455126 kubelet[3639]: E1216 02:06:23.454818 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.455126 kubelet[3639]: W1216 02:06:23.454827 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.455126 kubelet[3639]: E1216 02:06:23.454836 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:23.455126 kubelet[3639]: E1216 02:06:23.455031 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.455126 kubelet[3639]: W1216 02:06:23.455038 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.455126 kubelet[3639]: E1216 02:06:23.455045 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.455320 kubelet[3639]: E1216 02:06:23.455264 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.455320 kubelet[3639]: W1216 02:06:23.455274 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.455320 kubelet[3639]: E1216 02:06:23.455282 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.455478 kubelet[3639]: E1216 02:06:23.455467 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.455478 kubelet[3639]: W1216 02:06:23.455476 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.455558 kubelet[3639]: E1216 02:06:23.455483 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.455672 kubelet[3639]: E1216 02:06:23.455662 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.455672 kubelet[3639]: W1216 02:06:23.455670 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.455751 kubelet[3639]: E1216 02:06:23.455677 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.455850 kubelet[3639]: E1216 02:06:23.455838 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.455850 kubelet[3639]: W1216 02:06:23.455846 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.455909 kubelet[3639]: E1216 02:06:23.455853 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:23.456062 kubelet[3639]: E1216 02:06:23.456048 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.456062 kubelet[3639]: W1216 02:06:23.456060 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.456131 kubelet[3639]: E1216 02:06:23.456070 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.456801 kubelet[3639]: E1216 02:06:23.456627 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.456801 kubelet[3639]: W1216 02:06:23.456676 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.456801 kubelet[3639]: E1216 02:06:23.456689 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.457330 kubelet[3639]: E1216 02:06:23.457176 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.457330 kubelet[3639]: W1216 02:06:23.457189 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.457330 kubelet[3639]: E1216 02:06:23.457199 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.457976 kubelet[3639]: E1216 02:06:23.457509 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.457976 kubelet[3639]: W1216 02:06:23.457524 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.457976 kubelet[3639]: E1216 02:06:23.457535 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:23.459701 kubelet[3639]: E1216 02:06:23.458267 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:23.459863 kubelet[3639]: W1216 02:06:23.459808 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:23.459863 kubelet[3639]: E1216 02:06:23.459838 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:24.361573 kubelet[3639]: I1216 02:06:24.361409 3639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 02:06:24.459632 kubelet[3639]: E1216 02:06:24.459392 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.459632 kubelet[3639]: W1216 02:06:24.459626 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.459867 kubelet[3639]: E1216 02:06:24.459648 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.460050 kubelet[3639]: E1216 02:06:24.460015 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.460050 kubelet[3639]: W1216 02:06:24.460028 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.460221 kubelet[3639]: E1216 02:06:24.460038 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.460485 kubelet[3639]: E1216 02:06:24.460470 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.460485 kubelet[3639]: W1216 02:06:24.460482 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.460542 kubelet[3639]: E1216 02:06:24.460492 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.460711 kubelet[3639]: E1216 02:06:24.460696 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.460711 kubelet[3639]: W1216 02:06:24.460709 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.460767 kubelet[3639]: E1216 02:06:24.460721 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.461069 kubelet[3639]: E1216 02:06:24.461049 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.461069 kubelet[3639]: W1216 02:06:24.461064 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.461129 kubelet[3639]: E1216 02:06:24.461074 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:24.461456 kubelet[3639]: E1216 02:06:24.461438 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.461456 kubelet[3639]: W1216 02:06:24.461451 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.461519 kubelet[3639]: E1216 02:06:24.461460 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.461690 kubelet[3639]: E1216 02:06:24.461674 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.461690 kubelet[3639]: W1216 02:06:24.461686 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.461848 kubelet[3639]: E1216 02:06:24.461695 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.461990 kubelet[3639]: E1216 02:06:24.461975 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.461990 kubelet[3639]: W1216 02:06:24.461988 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.462037 kubelet[3639]: E1216 02:06:24.461998 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.462262 kubelet[3639]: E1216 02:06:24.462246 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.462262 kubelet[3639]: W1216 02:06:24.462260 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.462326 kubelet[3639]: E1216 02:06:24.462269 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.462562 kubelet[3639]: E1216 02:06:24.462546 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.462562 kubelet[3639]: W1216 02:06:24.462559 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.462619 kubelet[3639]: E1216 02:06:24.462569 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:24.463006 kubelet[3639]: E1216 02:06:24.462991 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.463006 kubelet[3639]: W1216 02:06:24.463003 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.463142 kubelet[3639]: E1216 02:06:24.463012 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.463397 kubelet[3639]: E1216 02:06:24.463380 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.463397 kubelet[3639]: W1216 02:06:24.463394 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.463455 kubelet[3639]: E1216 02:06:24.463404 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.463928 kubelet[3639]: E1216 02:06:24.463911 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.463928 kubelet[3639]: W1216 02:06:24.463925 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.464053 kubelet[3639]: E1216 02:06:24.463935 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.464233 kubelet[3639]: E1216 02:06:24.464217 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.464233 kubelet[3639]: W1216 02:06:24.464231 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.464286 kubelet[3639]: E1216 02:06:24.464240 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.464417 kubelet[3639]: E1216 02:06:24.464403 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.464417 kubelet[3639]: W1216 02:06:24.464415 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.464466 kubelet[3639]: E1216 02:06:24.464424 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:24.464875 kubelet[3639]: E1216 02:06:24.464863 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.465013 kubelet[3639]: W1216 02:06:24.464938 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.465148 kubelet[3639]: E1216 02:06:24.465077 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.465462 kubelet[3639]: E1216 02:06:24.465383 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.465462 kubelet[3639]: W1216 02:06:24.465394 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.465462 kubelet[3639]: E1216 02:06:24.465403 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.465931 kubelet[3639]: E1216 02:06:24.465782 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.465931 kubelet[3639]: W1216 02:06:24.465807 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.465931 kubelet[3639]: E1216 02:06:24.465817 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.466242 kubelet[3639]: E1216 02:06:24.466210 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.466242 kubelet[3639]: W1216 02:06:24.466222 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.466242 kubelet[3639]: E1216 02:06:24.466233 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.466744 kubelet[3639]: E1216 02:06:24.466696 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.466744 kubelet[3639]: W1216 02:06:24.466724 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.466744 kubelet[3639]: E1216 02:06:24.466734 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:24.468111 kubelet[3639]: E1216 02:06:24.467991 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.468111 kubelet[3639]: W1216 02:06:24.468007 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.468111 kubelet[3639]: E1216 02:06:24.468016 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.468381 kubelet[3639]: E1216 02:06:24.468264 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.468381 kubelet[3639]: W1216 02:06:24.468275 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.468381 kubelet[3639]: E1216 02:06:24.468285 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.468635 kubelet[3639]: E1216 02:06:24.468512 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.468635 kubelet[3639]: W1216 02:06:24.468522 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.468635 kubelet[3639]: E1216 02:06:24.468531 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.468843 kubelet[3639]: E1216 02:06:24.468772 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.468843 kubelet[3639]: W1216 02:06:24.468782 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.468843 kubelet[3639]: E1216 02:06:24.468833 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.469152 kubelet[3639]: E1216 02:06:24.469141 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.469700 kubelet[3639]: W1216 02:06:24.469429 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.469700 kubelet[3639]: E1216 02:06:24.469446 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:24.469961 kubelet[3639]: E1216 02:06:24.469948 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.470039 kubelet[3639]: W1216 02:06:24.470030 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.470166 kubelet[3639]: E1216 02:06:24.470074 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.470683 kubelet[3639]: E1216 02:06:24.470671 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.471121 kubelet[3639]: W1216 02:06:24.471103 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.471487 kubelet[3639]: E1216 02:06:24.471193 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.472765 kubelet[3639]: E1216 02:06:24.472640 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.472765 kubelet[3639]: W1216 02:06:24.472654 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.472765 kubelet[3639]: E1216 02:06:24.472665 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.473564 kubelet[3639]: E1216 02:06:24.473535 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.473564 kubelet[3639]: W1216 02:06:24.473548 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.473724 kubelet[3639]: E1216 02:06:24.473557 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.474396 kubelet[3639]: E1216 02:06:24.474324 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.474396 kubelet[3639]: W1216 02:06:24.474336 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.474396 kubelet[3639]: E1216 02:06:24.474345 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:24.475417 kubelet[3639]: E1216 02:06:24.475253 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.475417 kubelet[3639]: W1216 02:06:24.475266 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.475417 kubelet[3639]: E1216 02:06:24.475279 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.476381 kubelet[3639]: E1216 02:06:24.476204 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.476381 kubelet[3639]: W1216 02:06:24.476217 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.476381 kubelet[3639]: E1216 02:06:24.476226 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:06:24.477212 kubelet[3639]: E1216 02:06:24.477137 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:06:24.477330 kubelet[3639]: W1216 02:06:24.477292 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:06:24.477330 kubelet[3639]: E1216 02:06:24.477310 3639 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:06:24.531590 containerd[2088]: time="2025-12-16T02:06:24.531551304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:24.535326 containerd[2088]: time="2025-12-16T02:06:24.535281361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 02:06:24.538819 containerd[2088]: time="2025-12-16T02:06:24.538781891Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:24.547502 containerd[2088]: time="2025-12-16T02:06:24.547457806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:24.547932 containerd[2088]: time="2025-12-16T02:06:24.547760336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.344099243s" Dec 16 02:06:24.547932 containerd[2088]: time="2025-12-16T02:06:24.547801065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 02:06:24.559325 containerd[2088]: time="2025-12-16T02:06:24.559294528Z" level=info msg="CreateContainer within sandbox \"582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 02:06:24.592674 containerd[2088]: time="2025-12-16T02:06:24.590655557Z" level=info msg="Container 83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:06:24.592306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281619549.mount: Deactivated successfully. Dec 16 02:06:24.615197 containerd[2088]: time="2025-12-16T02:06:24.615094682Z" level=info msg="CreateContainer within sandbox \"582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a\"" Dec 16 02:06:24.616531 containerd[2088]: time="2025-12-16T02:06:24.616504472Z" level=info msg="StartContainer for \"83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a\"" Dec 16 02:06:24.617906 containerd[2088]: time="2025-12-16T02:06:24.617882420Z" level=info msg="connecting to shim 83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a" address="unix:///run/containerd/s/6f64e85e619e0dbe5c2b798be72f51e0d62683a54ba25ebe7d48a1f7e7b375b3" protocol=ttrpc version=3 Dec 16 02:06:24.637959 systemd[1]: Started cri-containerd-83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a.scope - libcontainer container 83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a. 
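The repeated kubelet driver-call failures above come from the FlexVolume prober: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not yet an executable on the node, so each "init" call produces empty output and the JSON unmarshal fails. The flexvol-driver container created just above (from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4) is what normally installs that binary, after which the probe errors should stop. For reference, a minimal sketch of the "init" handshake the kubelet expects from a FlexVolume driver is shown below; field names follow the FlexVolume spec, and reporting attach=false is an assumption for this kind of driver.

package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet parses from a FlexVolume call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// kubelet parses stdout as JSON; an empty response is exactly the
		// "unexpected end of JSON input" error seen in the log above.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Other FlexVolume calls (mount, unmount, ...) are omitted in this sketch.
	json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
}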
Dec 16 02:06:24.685000 audit: BPF prog-id=190 op=LOAD Dec 16 02:06:24.690051 kernel: kauditd_printk_skb: 80 callbacks suppressed Dec 16 02:06:24.685000 audit[4305]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4119 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:24.710498 kernel: audit: type=1334 audit(1765850784.685:589): prog-id=190 op=LOAD Dec 16 02:06:24.710570 kernel: audit: type=1300 audit(1765850784.685:589): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4119 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:24.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343437363638613231363466303330376337393264306136646136 Dec 16 02:06:24.726108 kernel: audit: type=1327 audit(1765850784.685:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343437363638613231363466303330376337393264306136646136 Dec 16 02:06:24.689000 audit: BPF prog-id=191 op=LOAD Dec 16 02:06:24.733138 kernel: audit: type=1334 audit(1765850784.689:590): prog-id=191 op=LOAD Dec 16 02:06:24.689000 audit[4305]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4119 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:24.749873 kernel: audit: type=1300 audit(1765850784.689:590): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4119 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:24.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343437363638613231363466303330376337393264306136646136 Dec 16 02:06:24.766188 kernel: audit: type=1327 audit(1765850784.689:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343437363638613231363466303330376337393264306136646136 Dec 16 02:06:24.689000 audit: BPF prog-id=191 op=UNLOAD Dec 16 02:06:24.771521 kernel: audit: type=1334 audit(1765850784.689:591): prog-id=191 op=UNLOAD Dec 16 02:06:24.689000 audit[4305]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:24.788355 kernel: audit: type=1300 
audit(1765850784.689:591): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:24.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343437363638613231363466303330376337393264306136646136 Dec 16 02:06:24.805291 kernel: audit: type=1327 audit(1765850784.689:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343437363638613231363466303330376337393264306136646136 Dec 16 02:06:24.689000 audit: BPF prog-id=190 op=UNLOAD Dec 16 02:06:24.807263 systemd[1]: cri-containerd-83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a.scope: Deactivated successfully. Dec 16 02:06:24.810246 kernel: audit: type=1334 audit(1765850784.689:592): prog-id=190 op=UNLOAD Dec 16 02:06:24.689000 audit[4305]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:24.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343437363638613231363466303330376337393264306136646136 Dec 16 02:06:24.689000 audit: BPF prog-id=192 op=LOAD Dec 16 02:06:24.689000 audit[4305]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4119 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:24.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343437363638613231363466303330376337393264306136646136 Dec 16 02:06:24.811000 audit: BPF prog-id=192 op=UNLOAD Dec 16 02:06:24.816674 containerd[2088]: time="2025-12-16T02:06:24.816644441Z" level=info msg="StartContainer for \"83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a\" returns successfully" Dec 16 02:06:24.819605 containerd[2088]: time="2025-12-16T02:06:24.819482421Z" level=info msg="received container exit event container_id:\"83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a\" id:\"83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a\" pid:4317 exited_at:{seconds:1765850784 nanos:818050167}" Dec 16 02:06:24.836546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83447668a2164f0307c792d0a6da626b0b7e3ba418ecab7675cadb0ce1f9438a-rootfs.mount: Deactivated successfully. 
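The audit records above (BPF prog-id LOAD/UNLOAD, SYSCALL, PROCTITLE) are emitted while runc sets up and tears down the short-lived flexvol-driver container; the PROCTITLE field is the hex-encoded, NUL-separated argv of the audited process. A small sketch for decoding such a field follows; the hex literal is shortened for illustration, and the full proctitle= value from a record would be substituted in its place.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Truncated example value; paste the full proctitle= field from an audit record.
	proctitle := "69707461626C65732D726573746F7265002D770035"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv entries are separated by NUL bytes in the PROCTITLE record.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(args) // e.g. [iptables-restore -w 5]
}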
Dec 16 02:06:25.313850 kubelet[3639]: E1216 02:06:25.289461 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:06:25.385641 kubelet[3639]: I1216 02:06:25.385555 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7685ffc944-2dthq" podStartSLOduration=3.3903670249999998 podStartE2EDuration="5.385539641s" podCreationTimestamp="2025-12-16 02:06:20 +0000 UTC" firstStartedPulling="2025-12-16 02:06:21.206735364 +0000 UTC m=+24.104389140" lastFinishedPulling="2025-12-16 02:06:23.20190798 +0000 UTC m=+26.099561756" observedRunningTime="2025-12-16 02:06:23.384758034 +0000 UTC m=+26.282411874" watchObservedRunningTime="2025-12-16 02:06:25.385539641 +0000 UTC m=+28.283193417" Dec 16 02:06:26.369509 containerd[2088]: time="2025-12-16T02:06:26.369372554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 02:06:27.289947 kubelet[3639]: E1216 02:06:27.289619 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:06:29.289984 kubelet[3639]: E1216 02:06:29.289840 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:06:29.380496 kubelet[3639]: I1216 02:06:29.380444 3639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 02:06:29.404000 audit[4360]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:29.404000 audit[4360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc5fc5710 a2=0 a3=1 items=0 ppid=3793 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:29.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:29.410000 audit[4360]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:29.410000 audit[4360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc5fc5710 a2=0 a3=1 items=0 ppid=3793 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:29.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:29.447727 containerd[2088]: time="2025-12-16T02:06:29.447680679Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:29.451388 containerd[2088]: time="2025-12-16T02:06:29.451345071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 02:06:29.456215 containerd[2088]: time="2025-12-16T02:06:29.456188262Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:29.461316 containerd[2088]: time="2025-12-16T02:06:29.461285853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:29.461800 containerd[2088]: time="2025-12-16T02:06:29.461538581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.092117122s" Dec 16 02:06:29.461800 containerd[2088]: time="2025-12-16T02:06:29.461559502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 02:06:29.469527 containerd[2088]: time="2025-12-16T02:06:29.469497418Z" level=info msg="CreateContainer within sandbox \"582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 02:06:29.494641 containerd[2088]: time="2025-12-16T02:06:29.493915426Z" level=info msg="Container ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:06:29.515175 containerd[2088]: time="2025-12-16T02:06:29.515143553Z" level=info msg="CreateContainer within sandbox \"582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e\"" Dec 16 02:06:29.515905 containerd[2088]: time="2025-12-16T02:06:29.515882217Z" level=info msg="StartContainer for \"ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e\"" Dec 16 02:06:29.517202 containerd[2088]: time="2025-12-16T02:06:29.517168964Z" level=info msg="connecting to shim ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e" address="unix:///run/containerd/s/6f64e85e619e0dbe5c2b798be72f51e0d62683a54ba25ebe7d48a1f7e7b375b3" protocol=ttrpc version=3 Dec 16 02:06:29.539937 systemd[1]: Started cri-containerd-ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e.scope - libcontainer container ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e. 
Dec 16 02:06:29.589000 audit: BPF prog-id=193 op=LOAD Dec 16 02:06:29.589000 audit[4365]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4119 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:29.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361366134666532363334656164663635643631613366656432393862 Dec 16 02:06:29.589000 audit: BPF prog-id=194 op=LOAD Dec 16 02:06:29.589000 audit[4365]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4119 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:29.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361366134666532363334656164663635643631613366656432393862 Dec 16 02:06:29.590000 audit: BPF prog-id=194 op=UNLOAD Dec 16 02:06:29.590000 audit[4365]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:29.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361366134666532363334656164663635643631613366656432393862 Dec 16 02:06:29.590000 audit: BPF prog-id=193 op=UNLOAD Dec 16 02:06:29.590000 audit[4365]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:29.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361366134666532363334656164663635643631613366656432393862 Dec 16 02:06:29.590000 audit: BPF prog-id=195 op=LOAD Dec 16 02:06:29.590000 audit[4365]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4119 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:29.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361366134666532363334656164663635643631613366656432393862 Dec 16 02:06:29.612499 containerd[2088]: time="2025-12-16T02:06:29.612445670Z" level=info msg="StartContainer for 
\"ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e\" returns successfully" Dec 16 02:06:30.707658 systemd[1]: cri-containerd-ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e.scope: Deactivated successfully. Dec 16 02:06:30.708234 systemd[1]: cri-containerd-ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e.scope: Consumed 309ms CPU time, 187.2M memory peak, 165.9M written to disk. Dec 16 02:06:30.709235 containerd[2088]: time="2025-12-16T02:06:30.709100789Z" level=info msg="received container exit event container_id:\"ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e\" id:\"ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e\" pid:4377 exited_at:{seconds:1765850790 nanos:708754762}" Dec 16 02:06:30.717806 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 02:06:30.717892 kernel: audit: type=1334 audit(1765850790.712:602): prog-id=195 op=UNLOAD Dec 16 02:06:30.712000 audit: BPF prog-id=195 op=UNLOAD Dec 16 02:06:30.729692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca6a4fe2634eadf65d61a3fed298b848ebc2ced1b514bdb153aaec7ef50d094e-rootfs.mount: Deactivated successfully. Dec 16 02:06:30.774429 kubelet[3639]: I1216 02:06:30.773851 3639 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 02:06:31.628410 systemd[1]: Created slice kubepods-burstable-podef9d7f1f_9be7_4bfe_b6f9_f394238ebde1.slice - libcontainer container kubepods-burstable-podef9d7f1f_9be7_4bfe_b6f9_f394238ebde1.slice. Dec 16 02:06:31.635474 systemd[1]: Created slice kubepods-besteffort-pod8d6bb703_5160_48e3_8477_a1bbde860409.slice - libcontainer container kubepods-besteffort-pod8d6bb703_5160_48e3_8477_a1bbde860409.slice. Dec 16 02:06:31.641362 containerd[2088]: time="2025-12-16T02:06:31.641012463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmzsh,Uid:8d6bb703-5160-48e3-8477-a1bbde860409,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:31.647835 systemd[1]: Created slice kubepods-burstable-pod7c04891e_a2dd_497f_9457_bad48f5fedbe.slice - libcontainer container kubepods-burstable-pod7c04891e_a2dd_497f_9457_bad48f5fedbe.slice. Dec 16 02:06:31.663912 systemd[1]: Created slice kubepods-besteffort-pod222ba326_59d2_4676_b30c_82b655f93a5f.slice - libcontainer container kubepods-besteffort-pod222ba326_59d2_4676_b30c_82b655f93a5f.slice. Dec 16 02:06:31.671307 systemd[1]: Created slice kubepods-besteffort-podf469a490_fb18_4868_b583_cd075b9a892c.slice - libcontainer container kubepods-besteffort-podf469a490_fb18_4868_b583_cd075b9a892c.slice. Dec 16 02:06:31.685621 systemd[1]: Created slice kubepods-besteffort-pod66b06eca_07d1_44eb_b416_c4ad8a434e78.slice - libcontainer container kubepods-besteffort-pod66b06eca_07d1_44eb_b416_c4ad8a434e78.slice. Dec 16 02:06:31.702414 systemd[1]: Created slice kubepods-besteffort-pode320a820_257c_48b1_85de_c1dd7b465c9a.slice - libcontainer container kubepods-besteffort-pode320a820_257c_48b1_85de_c1dd7b465c9a.slice. Dec 16 02:06:31.710365 systemd[1]: Created slice kubepods-besteffort-pod5f7514dc_37b5_4bac_8d4e_04c87fb3f679.slice - libcontainer container kubepods-besteffort-pod5f7514dc_37b5_4bac_8d4e_04c87fb3f679.slice. 
Dec 16 02:06:31.714816 kubelet[3639]: I1216 02:06:31.713580 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f469a490-fb18-4868-b583-cd075b9a892c-goldmane-ca-bundle\") pod \"goldmane-666569f655-s52v5\" (UID: \"f469a490-fb18-4868-b583-cd075b9a892c\") " pod="calico-system/goldmane-666569f655-s52v5" Dec 16 02:06:31.714816 kubelet[3639]: I1216 02:06:31.713616 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f7514dc-37b5-4bac-8d4e-04c87fb3f679-calico-apiserver-certs\") pod \"calico-apiserver-69677bd74f-b42tq\" (UID: \"5f7514dc-37b5-4bac-8d4e-04c87fb3f679\") " pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" Dec 16 02:06:31.714816 kubelet[3639]: I1216 02:06:31.713629 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c04891e-a2dd-497f-9457-bad48f5fedbe-config-volume\") pod \"coredns-674b8bbfcf-vsht2\" (UID: \"7c04891e-a2dd-497f-9457-bad48f5fedbe\") " pod="kube-system/coredns-674b8bbfcf-vsht2" Dec 16 02:06:31.714816 kubelet[3639]: I1216 02:06:31.713658 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qptxg\" (UniqueName: \"kubernetes.io/projected/7c04891e-a2dd-497f-9457-bad48f5fedbe-kube-api-access-qptxg\") pod \"coredns-674b8bbfcf-vsht2\" (UID: \"7c04891e-a2dd-497f-9457-bad48f5fedbe\") " pod="kube-system/coredns-674b8bbfcf-vsht2" Dec 16 02:06:31.714816 kubelet[3639]: I1216 02:06:31.713670 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f469a490-fb18-4868-b583-cd075b9a892c-goldmane-key-pair\") pod \"goldmane-666569f655-s52v5\" (UID: \"f469a490-fb18-4868-b583-cd075b9a892c\") " pod="calico-system/goldmane-666569f655-s52v5" Dec 16 02:06:31.714982 kubelet[3639]: I1216 02:06:31.713679 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/222ba326-59d2-4676-b30c-82b655f93a5f-tigera-ca-bundle\") pod \"calico-kube-controllers-549bfc7bd9-vst6l\" (UID: \"222ba326-59d2-4676-b30c-82b655f93a5f\") " pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" Dec 16 02:06:31.714982 kubelet[3639]: I1216 02:06:31.713709 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cxzh\" (UniqueName: \"kubernetes.io/projected/ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1-kube-api-access-4cxzh\") pod \"coredns-674b8bbfcf-dpg5v\" (UID: \"ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1\") " pod="kube-system/coredns-674b8bbfcf-dpg5v" Dec 16 02:06:31.714982 kubelet[3639]: I1216 02:06:31.713732 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2b6w\" (UniqueName: \"kubernetes.io/projected/222ba326-59d2-4676-b30c-82b655f93a5f-kube-api-access-p2b6w\") pod \"calico-kube-controllers-549bfc7bd9-vst6l\" (UID: \"222ba326-59d2-4676-b30c-82b655f93a5f\") " pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" Dec 16 02:06:31.714982 kubelet[3639]: I1216 02:06:31.713748 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25gj\" 
(UniqueName: \"kubernetes.io/projected/f469a490-fb18-4868-b583-cd075b9a892c-kube-api-access-m25gj\") pod \"goldmane-666569f655-s52v5\" (UID: \"f469a490-fb18-4868-b583-cd075b9a892c\") " pod="calico-system/goldmane-666569f655-s52v5" Dec 16 02:06:31.714982 kubelet[3639]: I1216 02:06:31.713757 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl6qm\" (UniqueName: \"kubernetes.io/projected/e320a820-257c-48b1-85de-c1dd7b465c9a-kube-api-access-hl6qm\") pod \"calico-apiserver-69677bd74f-gv2ff\" (UID: \"e320a820-257c-48b1-85de-c1dd7b465c9a\") " pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" Dec 16 02:06:31.715060 kubelet[3639]: I1216 02:06:31.713766 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1-config-volume\") pod \"coredns-674b8bbfcf-dpg5v\" (UID: \"ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1\") " pod="kube-system/coredns-674b8bbfcf-dpg5v" Dec 16 02:06:31.715060 kubelet[3639]: I1216 02:06:31.713778 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66b06eca-07d1-44eb-b416-c4ad8a434e78-whisker-backend-key-pair\") pod \"whisker-7489f8f868-8xx2k\" (UID: \"66b06eca-07d1-44eb-b416-c4ad8a434e78\") " pod="calico-system/whisker-7489f8f868-8xx2k" Dec 16 02:06:31.715060 kubelet[3639]: I1216 02:06:31.713813 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66b06eca-07d1-44eb-b416-c4ad8a434e78-whisker-ca-bundle\") pod \"whisker-7489f8f868-8xx2k\" (UID: \"66b06eca-07d1-44eb-b416-c4ad8a434e78\") " pod="calico-system/whisker-7489f8f868-8xx2k" Dec 16 02:06:31.715060 kubelet[3639]: I1216 02:06:31.713826 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pm4\" (UniqueName: \"kubernetes.io/projected/5f7514dc-37b5-4bac-8d4e-04c87fb3f679-kube-api-access-b7pm4\") pod \"calico-apiserver-69677bd74f-b42tq\" (UID: \"5f7514dc-37b5-4bac-8d4e-04c87fb3f679\") " pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" Dec 16 02:06:31.715060 kubelet[3639]: I1216 02:06:31.713840 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f469a490-fb18-4868-b583-cd075b9a892c-config\") pod \"goldmane-666569f655-s52v5\" (UID: \"f469a490-fb18-4868-b583-cd075b9a892c\") " pod="calico-system/goldmane-666569f655-s52v5" Dec 16 02:06:31.715137 kubelet[3639]: I1216 02:06:31.713852 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e320a820-257c-48b1-85de-c1dd7b465c9a-calico-apiserver-certs\") pod \"calico-apiserver-69677bd74f-gv2ff\" (UID: \"e320a820-257c-48b1-85de-c1dd7b465c9a\") " pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" Dec 16 02:06:31.715137 kubelet[3639]: I1216 02:06:31.713861 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w9wb\" (UniqueName: \"kubernetes.io/projected/66b06eca-07d1-44eb-b416-c4ad8a434e78-kube-api-access-2w9wb\") pod \"whisker-7489f8f868-8xx2k\" (UID: \"66b06eca-07d1-44eb-b416-c4ad8a434e78\") " 
pod="calico-system/whisker-7489f8f868-8xx2k" Dec 16 02:06:31.726897 containerd[2088]: time="2025-12-16T02:06:31.726851100Z" level=error msg="Failed to destroy network for sandbox \"ca3b5a82ff7b63c9f78ad11786ad14c710b1ebfe4d4bd7b930974c4030948c4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:31.728976 systemd[1]: run-netns-cni\x2d14eb4353\x2d8b6c\x2d7ee9\x2d9305\x2d2807c1398ced.mount: Deactivated successfully. Dec 16 02:06:31.738193 containerd[2088]: time="2025-12-16T02:06:31.738128685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmzsh,Uid:8d6bb703-5160-48e3-8477-a1bbde860409,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3b5a82ff7b63c9f78ad11786ad14c710b1ebfe4d4bd7b930974c4030948c4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:31.738644 kubelet[3639]: E1216 02:06:31.738488 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3b5a82ff7b63c9f78ad11786ad14c710b1ebfe4d4bd7b930974c4030948c4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:31.738644 kubelet[3639]: E1216 02:06:31.738557 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3b5a82ff7b63c9f78ad11786ad14c710b1ebfe4d4bd7b930974c4030948c4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:31.738644 kubelet[3639]: E1216 02:06:31.738574 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3b5a82ff7b63c9f78ad11786ad14c710b1ebfe4d4bd7b930974c4030948c4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:31.739134 kubelet[3639]: E1216 02:06:31.739040 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca3b5a82ff7b63c9f78ad11786ad14c710b1ebfe4d4bd7b930974c4030948c4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:06:31.933602 containerd[2088]: time="2025-12-16T02:06:31.933356354Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dpg5v,Uid:ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1,Namespace:kube-system,Attempt:0,}" Dec 16 02:06:31.958394 containerd[2088]: time="2025-12-16T02:06:31.958356654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vsht2,Uid:7c04891e-a2dd-497f-9457-bad48f5fedbe,Namespace:kube-system,Attempt:0,}" Dec 16 02:06:31.983052 containerd[2088]: time="2025-12-16T02:06:31.982985237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549bfc7bd9-vst6l,Uid:222ba326-59d2-4676-b30c-82b655f93a5f,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:31.984116 containerd[2088]: time="2025-12-16T02:06:31.984021582Z" level=error msg="Failed to destroy network for sandbox \"dc5eb35885394e08b0c289045672191c729fe0da141314b1cb9204182a7c8d18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:31.985191 containerd[2088]: time="2025-12-16T02:06:31.985156324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s52v5,Uid:f469a490-fb18-4868-b583-cd075b9a892c,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:31.994891 containerd[2088]: time="2025-12-16T02:06:31.994827241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dpg5v,Uid:ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5eb35885394e08b0c289045672191c729fe0da141314b1cb9204182a7c8d18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:31.995304 kubelet[3639]: E1216 02:06:31.995248 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5eb35885394e08b0c289045672191c729fe0da141314b1cb9204182a7c8d18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:31.995304 kubelet[3639]: E1216 02:06:31.995302 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5eb35885394e08b0c289045672191c729fe0da141314b1cb9204182a7c8d18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dpg5v" Dec 16 02:06:31.996029 kubelet[3639]: E1216 02:06:31.995317 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc5eb35885394e08b0c289045672191c729fe0da141314b1cb9204182a7c8d18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dpg5v" Dec 16 02:06:31.996029 kubelet[3639]: E1216 02:06:31.995363 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dpg5v_kube-system(ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-dpg5v_kube-system(ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc5eb35885394e08b0c289045672191c729fe0da141314b1cb9204182a7c8d18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dpg5v" podUID="ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1" Dec 16 02:06:31.999371 containerd[2088]: time="2025-12-16T02:06:31.999252058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7489f8f868-8xx2k,Uid:66b06eca-07d1-44eb-b416-c4ad8a434e78,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:32.010880 containerd[2088]: time="2025-12-16T02:06:32.010845941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-gv2ff,Uid:e320a820-257c-48b1-85de-c1dd7b465c9a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:06:32.017917 containerd[2088]: time="2025-12-16T02:06:32.017850947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-b42tq,Uid:5f7514dc-37b5-4bac-8d4e-04c87fb3f679,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:06:32.025706 containerd[2088]: time="2025-12-16T02:06:32.025658155Z" level=error msg="Failed to destroy network for sandbox \"2b7c0ad65cb35da46824a164125bb9f4e26d04b217aca1dbc794807546f821d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.072357 containerd[2088]: time="2025-12-16T02:06:32.072243385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vsht2,Uid:7c04891e-a2dd-497f-9457-bad48f5fedbe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7c0ad65cb35da46824a164125bb9f4e26d04b217aca1dbc794807546f821d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.072714 kubelet[3639]: E1216 02:06:32.072502 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7c0ad65cb35da46824a164125bb9f4e26d04b217aca1dbc794807546f821d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.072714 kubelet[3639]: E1216 02:06:32.072747 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7c0ad65cb35da46824a164125bb9f4e26d04b217aca1dbc794807546f821d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vsht2" Dec 16 02:06:32.072714 kubelet[3639]: E1216 02:06:32.072770 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7c0ad65cb35da46824a164125bb9f4e26d04b217aca1dbc794807546f821d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vsht2" Dec 16 02:06:32.074031 kubelet[3639]: E1216 02:06:32.073196 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vsht2_kube-system(7c04891e-a2dd-497f-9457-bad48f5fedbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vsht2_kube-system(7c04891e-a2dd-497f-9457-bad48f5fedbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b7c0ad65cb35da46824a164125bb9f4e26d04b217aca1dbc794807546f821d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vsht2" podUID="7c04891e-a2dd-497f-9457-bad48f5fedbe" Dec 16 02:06:32.079009 containerd[2088]: time="2025-12-16T02:06:32.078949901Z" level=error msg="Failed to destroy network for sandbox \"a54dc46fa4c49d86d479f26a31a61dc982d9b6692f645ffdb988323efb65fc1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.095641 containerd[2088]: time="2025-12-16T02:06:32.095581422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549bfc7bd9-vst6l,Uid:222ba326-59d2-4676-b30c-82b655f93a5f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54dc46fa4c49d86d479f26a31a61dc982d9b6692f645ffdb988323efb65fc1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.095846 kubelet[3639]: E1216 02:06:32.095804 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54dc46fa4c49d86d479f26a31a61dc982d9b6692f645ffdb988323efb65fc1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.095898 kubelet[3639]: E1216 02:06:32.095847 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54dc46fa4c49d86d479f26a31a61dc982d9b6692f645ffdb988323efb65fc1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" Dec 16 02:06:32.095898 kubelet[3639]: E1216 02:06:32.095862 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54dc46fa4c49d86d479f26a31a61dc982d9b6692f645ffdb988323efb65fc1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" Dec 16 02:06:32.095934 kubelet[3639]: E1216 02:06:32.095898 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-549bfc7bd9-vst6l_calico-system(222ba326-59d2-4676-b30c-82b655f93a5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-549bfc7bd9-vst6l_calico-system(222ba326-59d2-4676-b30c-82b655f93a5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a54dc46fa4c49d86d479f26a31a61dc982d9b6692f645ffdb988323efb65fc1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:06:32.125011 containerd[2088]: time="2025-12-16T02:06:32.124947600Z" level=error msg="Failed to destroy network for sandbox \"d81423fbe1dd7cbc05988316f35564cf3553c08551e0f2148c5bca446d59bea8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.131510 containerd[2088]: time="2025-12-16T02:06:32.131396756Z" level=error msg="Failed to destroy network for sandbox \"d9b037e042966d3e8f17d017fb3675d1db75b3c4bf94253e86c594750fbe570c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.133289 containerd[2088]: time="2025-12-16T02:06:32.133254937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s52v5,Uid:f469a490-fb18-4868-b583-cd075b9a892c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d81423fbe1dd7cbc05988316f35564cf3553c08551e0f2148c5bca446d59bea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.134487 kubelet[3639]: E1216 02:06:32.134441 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d81423fbe1dd7cbc05988316f35564cf3553c08551e0f2148c5bca446d59bea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.134573 kubelet[3639]: E1216 02:06:32.134503 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d81423fbe1dd7cbc05988316f35564cf3553c08551e0f2148c5bca446d59bea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s52v5" Dec 16 02:06:32.134573 kubelet[3639]: E1216 02:06:32.134522 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d81423fbe1dd7cbc05988316f35564cf3553c08551e0f2148c5bca446d59bea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s52v5" Dec 16 02:06:32.134654 kubelet[3639]: E1216 02:06:32.134567 3639 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s52v5_calico-system(f469a490-fb18-4868-b583-cd075b9a892c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s52v5_calico-system(f469a490-fb18-4868-b583-cd075b9a892c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d81423fbe1dd7cbc05988316f35564cf3553c08551e0f2148c5bca446d59bea8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:06:32.147410 containerd[2088]: time="2025-12-16T02:06:32.147137112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7489f8f868-8xx2k,Uid:66b06eca-07d1-44eb-b416-c4ad8a434e78,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b037e042966d3e8f17d017fb3675d1db75b3c4bf94253e86c594750fbe570c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.147851 kubelet[3639]: E1216 02:06:32.147809 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b037e042966d3e8f17d017fb3675d1db75b3c4bf94253e86c594750fbe570c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.147985 kubelet[3639]: E1216 02:06:32.147863 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b037e042966d3e8f17d017fb3675d1db75b3c4bf94253e86c594750fbe570c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7489f8f868-8xx2k" Dec 16 02:06:32.147985 kubelet[3639]: E1216 02:06:32.147880 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9b037e042966d3e8f17d017fb3675d1db75b3c4bf94253e86c594750fbe570c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7489f8f868-8xx2k" Dec 16 02:06:32.147985 kubelet[3639]: E1216 02:06:32.147920 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7489f8f868-8xx2k_calico-system(66b06eca-07d1-44eb-b416-c4ad8a434e78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7489f8f868-8xx2k_calico-system(66b06eca-07d1-44eb-b416-c4ad8a434e78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9b037e042966d3e8f17d017fb3675d1db75b3c4bf94253e86c594750fbe570c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7489f8f868-8xx2k" podUID="66b06eca-07d1-44eb-b416-c4ad8a434e78" Dec 
16 02:06:32.149407 containerd[2088]: time="2025-12-16T02:06:32.149368361Z" level=error msg="Failed to destroy network for sandbox \"e0291d7c55d087489cc843e954502156a468579f98253b2033d7e9bb0186b72f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.152751 containerd[2088]: time="2025-12-16T02:06:32.152718366Z" level=error msg="Failed to destroy network for sandbox \"b666f093760a887a2fb7bf5fe5767b3b59a6fcc303efe0f1297320479ac5c765\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.165085 containerd[2088]: time="2025-12-16T02:06:32.164858604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-gv2ff,Uid:e320a820-257c-48b1-85de-c1dd7b465c9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0291d7c55d087489cc843e954502156a468579f98253b2033d7e9bb0186b72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.165173 kubelet[3639]: E1216 02:06:32.165027 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0291d7c55d087489cc843e954502156a468579f98253b2033d7e9bb0186b72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.165173 kubelet[3639]: E1216 02:06:32.165068 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0291d7c55d087489cc843e954502156a468579f98253b2033d7e9bb0186b72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" Dec 16 02:06:32.165173 kubelet[3639]: E1216 02:06:32.165084 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0291d7c55d087489cc843e954502156a468579f98253b2033d7e9bb0186b72f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" Dec 16 02:06:32.165245 kubelet[3639]: E1216 02:06:32.165115 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69677bd74f-gv2ff_calico-apiserver(e320a820-257c-48b1-85de-c1dd7b465c9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69677bd74f-gv2ff_calico-apiserver(e320a820-257c-48b1-85de-c1dd7b465c9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0291d7c55d087489cc843e954502156a468579f98253b2033d7e9bb0186b72f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:06:32.168715 containerd[2088]: time="2025-12-16T02:06:32.168655249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-b42tq,Uid:5f7514dc-37b5-4bac-8d4e-04c87fb3f679,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b666f093760a887a2fb7bf5fe5767b3b59a6fcc303efe0f1297320479ac5c765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.168852 kubelet[3639]: E1216 02:06:32.168824 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b666f093760a887a2fb7bf5fe5767b3b59a6fcc303efe0f1297320479ac5c765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:32.168889 kubelet[3639]: E1216 02:06:32.168857 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b666f093760a887a2fb7bf5fe5767b3b59a6fcc303efe0f1297320479ac5c765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" Dec 16 02:06:32.168889 kubelet[3639]: E1216 02:06:32.168870 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b666f093760a887a2fb7bf5fe5767b3b59a6fcc303efe0f1297320479ac5c765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" Dec 16 02:06:32.168929 kubelet[3639]: E1216 02:06:32.168902 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69677bd74f-b42tq_calico-apiserver(5f7514dc-37b5-4bac-8d4e-04c87fb3f679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69677bd74f-b42tq_calico-apiserver(5f7514dc-37b5-4bac-8d4e-04c87fb3f679)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b666f093760a887a2fb7bf5fe5767b3b59a6fcc303efe0f1297320479ac5c765\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:06:32.385804 containerd[2088]: time="2025-12-16T02:06:32.385738586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 02:06:43.290801 containerd[2088]: time="2025-12-16T02:06:43.289885695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-b42tq,Uid:5f7514dc-37b5-4bac-8d4e-04c87fb3f679,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:06:43.291268 containerd[2088]: time="2025-12-16T02:06:43.291246675Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-vsht2,Uid:7c04891e-a2dd-497f-9457-bad48f5fedbe,Namespace:kube-system,Attempt:0,}" Dec 16 02:06:43.291624 containerd[2088]: time="2025-12-16T02:06:43.291595126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s52v5,Uid:f469a490-fb18-4868-b583-cd075b9a892c,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:43.331343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207558801.mount: Deactivated successfully. Dec 16 02:06:45.290661 containerd[2088]: time="2025-12-16T02:06:45.290584982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7489f8f868-8xx2k,Uid:66b06eca-07d1-44eb-b416-c4ad8a434e78,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:45.290661 containerd[2088]: time="2025-12-16T02:06:45.290584966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmzsh,Uid:8d6bb703-5160-48e3-8477-a1bbde860409,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:46.289905 containerd[2088]: time="2025-12-16T02:06:46.289726326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-gv2ff,Uid:e320a820-257c-48b1-85de-c1dd7b465c9a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:06:46.289905 containerd[2088]: time="2025-12-16T02:06:46.289827946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549bfc7bd9-vst6l,Uid:222ba326-59d2-4676-b30c-82b655f93a5f,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:46.289905 containerd[2088]: time="2025-12-16T02:06:46.289860491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dpg5v,Uid:ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1,Namespace:kube-system,Attempt:0,}" Dec 16 02:06:50.454813 containerd[2088]: time="2025-12-16T02:06:50.453398907Z" level=error msg="Failed to destroy network for sandbox \"5981ff5994e95e83c2144eba045d7988f4615c9ca617c870972983c4087bfaf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:50.454732 systemd[1]: run-netns-cni\x2d2f52d7b1\x2d46d0\x2d8723\x2df403\x2df9783b5aa5b6.mount: Deactivated successfully. 
Dec 16 02:06:50.711577 containerd[2088]: time="2025-12-16T02:06:50.711089329Z" level=error msg="Failed to destroy network for sandbox \"0e44afa764d58f343268610a9917b4ee282d001248f1a731427c20e1861c1539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:50.755313 containerd[2088]: time="2025-12-16T02:06:50.755130072Z" level=error msg="Failed to destroy network for sandbox \"d146a9b96dc9cb39eb134fe7b648b30d1540abf750c4b4f6a939e0eb92812b52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.000122 containerd[2088]: time="2025-12-16T02:06:51.000017353Z" level=error msg="Failed to destroy network for sandbox \"4ad1fc159b53f50b98dd25f9a11f0babc2f8ca2b56e5a011df085e74cfc70401\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.042346 containerd[2088]: time="2025-12-16T02:06:51.042238693Z" level=error msg="Failed to destroy network for sandbox \"1791a12c6c91f43019206c34c983baa968431fc8f3af3400e547993f752039b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.109878 containerd[2088]: time="2025-12-16T02:06:51.109799625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s52v5,Uid:f469a490-fb18-4868-b583-cd075b9a892c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5981ff5994e95e83c2144eba045d7988f4615c9ca617c870972983c4087bfaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.110139 kubelet[3639]: E1216 02:06:51.110019 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5981ff5994e95e83c2144eba045d7988f4615c9ca617c870972983c4087bfaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.110139 kubelet[3639]: E1216 02:06:51.110076 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5981ff5994e95e83c2144eba045d7988f4615c9ca617c870972983c4087bfaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s52v5" Dec 16 02:06:51.110139 kubelet[3639]: E1216 02:06:51.110092 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5981ff5994e95e83c2144eba045d7988f4615c9ca617c870972983c4087bfaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-s52v5" Dec 16 02:06:51.110515 kubelet[3639]: E1216 02:06:51.110138 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s52v5_calico-system(f469a490-fb18-4868-b583-cd075b9a892c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s52v5_calico-system(f469a490-fb18-4868-b583-cd075b9a892c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5981ff5994e95e83c2144eba045d7988f4615c9ca617c870972983c4087bfaf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:06:51.199087 containerd[2088]: time="2025-12-16T02:06:51.198886034Z" level=error msg="Failed to destroy network for sandbox \"569b8386ff78dcffee764b7c35455f9629e9ab15840dea883d5eced0277ae1b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.203921 containerd[2088]: time="2025-12-16T02:06:51.203887926Z" level=error msg="Failed to destroy network for sandbox \"f259d74ab7837616e536afab1c80051a788838ecc5af15b9e454e9ab44763fb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.253273 containerd[2088]: time="2025-12-16T02:06:51.252968012Z" level=error msg="Failed to destroy network for sandbox \"b6e59d6d7bbe50c5840dfd234aab4681c28458a1ed2e852051ddc40607a35ea0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.269661 containerd[2088]: time="2025-12-16T02:06:51.269624399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:51.318944 containerd[2088]: time="2025-12-16T02:06:51.318847681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-b42tq,Uid:5f7514dc-37b5-4bac-8d4e-04c87fb3f679,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e44afa764d58f343268610a9917b4ee282d001248f1a731427c20e1861c1539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.319072 kubelet[3639]: E1216 02:06:51.319026 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e44afa764d58f343268610a9917b4ee282d001248f1a731427c20e1861c1539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.319109 kubelet[3639]: E1216 02:06:51.319074 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e44afa764d58f343268610a9917b4ee282d001248f1a731427c20e1861c1539\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" Dec 16 02:06:51.319109 kubelet[3639]: E1216 02:06:51.319091 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e44afa764d58f343268610a9917b4ee282d001248f1a731427c20e1861c1539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" Dec 16 02:06:51.319183 kubelet[3639]: E1216 02:06:51.319131 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69677bd74f-b42tq_calico-apiserver(5f7514dc-37b5-4bac-8d4e-04c87fb3f679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69677bd74f-b42tq_calico-apiserver(5f7514dc-37b5-4bac-8d4e-04c87fb3f679)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e44afa764d58f343268610a9917b4ee282d001248f1a731427c20e1861c1539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:06:51.371749 systemd[1]: run-netns-cni\x2d53d0dc90\x2db534\x2d7429\x2de2c6\x2d44f827fccdb1.mount: Deactivated successfully. Dec 16 02:06:51.371854 systemd[1]: run-netns-cni\x2d69ddae1b\x2d39d3\x2d131b\x2dfc1e\x2da1d3f68dda1c.mount: Deactivated successfully. Dec 16 02:06:51.371889 systemd[1]: run-netns-cni\x2d0baf7c6b\x2d735e\x2de7e5\x2d9e79\x2d994c6b4520d2.mount: Deactivated successfully. Dec 16 02:06:51.371928 systemd[1]: run-netns-cni\x2df5ce3f85\x2d1998\x2d07f9\x2dad94\x2d8119604eec58.mount: Deactivated successfully. 
Dec 16 02:06:51.410569 containerd[2088]: time="2025-12-16T02:06:51.410516422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vsht2,Uid:7c04891e-a2dd-497f-9457-bad48f5fedbe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d146a9b96dc9cb39eb134fe7b648b30d1540abf750c4b4f6a939e0eb92812b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.410849 kubelet[3639]: E1216 02:06:51.410760 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d146a9b96dc9cb39eb134fe7b648b30d1540abf750c4b4f6a939e0eb92812b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.410959 kubelet[3639]: E1216 02:06:51.410944 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d146a9b96dc9cb39eb134fe7b648b30d1540abf750c4b4f6a939e0eb92812b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vsht2" Dec 16 02:06:51.411088 kubelet[3639]: E1216 02:06:51.411006 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d146a9b96dc9cb39eb134fe7b648b30d1540abf750c4b4f6a939e0eb92812b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vsht2" Dec 16 02:06:51.411165 kubelet[3639]: E1216 02:06:51.411143 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vsht2_kube-system(7c04891e-a2dd-497f-9457-bad48f5fedbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vsht2_kube-system(7c04891e-a2dd-497f-9457-bad48f5fedbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d146a9b96dc9cb39eb134fe7b648b30d1540abf750c4b4f6a939e0eb92812b52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vsht2" podUID="7c04891e-a2dd-497f-9457-bad48f5fedbe" Dec 16 02:06:51.568834 containerd[2088]: time="2025-12-16T02:06:51.568666260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 02:06:51.573181 containerd[2088]: time="2025-12-16T02:06:51.573142055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7489f8f868-8xx2k,Uid:66b06eca-07d1-44eb-b416-c4ad8a434e78,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad1fc159b53f50b98dd25f9a11f0babc2f8ca2b56e5a011df085e74cfc70401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 16 02:06:51.573542 kubelet[3639]: E1216 02:06:51.573380 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad1fc159b53f50b98dd25f9a11f0babc2f8ca2b56e5a011df085e74cfc70401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.573542 kubelet[3639]: E1216 02:06:51.573441 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad1fc159b53f50b98dd25f9a11f0babc2f8ca2b56e5a011df085e74cfc70401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7489f8f868-8xx2k" Dec 16 02:06:51.573542 kubelet[3639]: E1216 02:06:51.573458 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad1fc159b53f50b98dd25f9a11f0babc2f8ca2b56e5a011df085e74cfc70401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7489f8f868-8xx2k" Dec 16 02:06:51.573649 kubelet[3639]: E1216 02:06:51.573507 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7489f8f868-8xx2k_calico-system(66b06eca-07d1-44eb-b416-c4ad8a434e78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7489f8f868-8xx2k_calico-system(66b06eca-07d1-44eb-b416-c4ad8a434e78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ad1fc159b53f50b98dd25f9a11f0babc2f8ca2b56e5a011df085e74cfc70401\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7489f8f868-8xx2k" podUID="66b06eca-07d1-44eb-b416-c4ad8a434e78" Dec 16 02:06:51.617483 containerd[2088]: time="2025-12-16T02:06:51.617416639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmzsh,Uid:8d6bb703-5160-48e3-8477-a1bbde860409,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1791a12c6c91f43019206c34c983baa968431fc8f3af3400e547993f752039b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.618093 kubelet[3639]: E1216 02:06:51.618060 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1791a12c6c91f43019206c34c983baa968431fc8f3af3400e547993f752039b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.618196 kubelet[3639]: E1216 02:06:51.618111 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1791a12c6c91f43019206c34c983baa968431fc8f3af3400e547993f752039b5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:51.618196 kubelet[3639]: E1216 02:06:51.618126 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1791a12c6c91f43019206c34c983baa968431fc8f3af3400e547993f752039b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmzsh" Dec 16 02:06:51.618196 kubelet[3639]: E1216 02:06:51.618170 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1791a12c6c91f43019206c34c983baa968431fc8f3af3400e547993f752039b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:06:51.665153 containerd[2088]: time="2025-12-16T02:06:51.665095486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-gv2ff,Uid:e320a820-257c-48b1-85de-c1dd7b465c9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"569b8386ff78dcffee764b7c35455f9629e9ab15840dea883d5eced0277ae1b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.665409 kubelet[3639]: E1216 02:06:51.665366 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569b8386ff78dcffee764b7c35455f9629e9ab15840dea883d5eced0277ae1b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.665473 kubelet[3639]: E1216 02:06:51.665425 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569b8386ff78dcffee764b7c35455f9629e9ab15840dea883d5eced0277ae1b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" Dec 16 02:06:51.665473 kubelet[3639]: E1216 02:06:51.665440 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569b8386ff78dcffee764b7c35455f9629e9ab15840dea883d5eced0277ae1b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" Dec 16 
02:06:51.665543 kubelet[3639]: E1216 02:06:51.665481 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69677bd74f-gv2ff_calico-apiserver(e320a820-257c-48b1-85de-c1dd7b465c9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69677bd74f-gv2ff_calico-apiserver(e320a820-257c-48b1-85de-c1dd7b465c9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"569b8386ff78dcffee764b7c35455f9629e9ab15840dea883d5eced0277ae1b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:06:51.712323 containerd[2088]: time="2025-12-16T02:06:51.712170641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549bfc7bd9-vst6l,Uid:222ba326-59d2-4676-b30c-82b655f93a5f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259d74ab7837616e536afab1c80051a788838ecc5af15b9e454e9ab44763fb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.712670 kubelet[3639]: E1216 02:06:51.712585 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259d74ab7837616e536afab1c80051a788838ecc5af15b9e454e9ab44763fb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.712765 kubelet[3639]: E1216 02:06:51.712682 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259d74ab7837616e536afab1c80051a788838ecc5af15b9e454e9ab44763fb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" Dec 16 02:06:51.712765 kubelet[3639]: E1216 02:06:51.712712 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259d74ab7837616e536afab1c80051a788838ecc5af15b9e454e9ab44763fb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" Dec 16 02:06:51.713408 kubelet[3639]: E1216 02:06:51.712769 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-549bfc7bd9-vst6l_calico-system(222ba326-59d2-4676-b30c-82b655f93a5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-549bfc7bd9-vst6l_calico-system(222ba326-59d2-4676-b30c-82b655f93a5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f259d74ab7837616e536afab1c80051a788838ecc5af15b9e454e9ab44763fb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:06:51.758363 containerd[2088]: time="2025-12-16T02:06:51.758226518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dpg5v,Uid:ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e59d6d7bbe50c5840dfd234aab4681c28458a1ed2e852051ddc40607a35ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.759155 kubelet[3639]: E1216 02:06:51.759095 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e59d6d7bbe50c5840dfd234aab4681c28458a1ed2e852051ddc40607a35ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:06:51.759469 kubelet[3639]: E1216 02:06:51.759426 3639 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e59d6d7bbe50c5840dfd234aab4681c28458a1ed2e852051ddc40607a35ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dpg5v" Dec 16 02:06:51.759469 kubelet[3639]: E1216 02:06:51.759453 3639 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6e59d6d7bbe50c5840dfd234aab4681c28458a1ed2e852051ddc40607a35ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dpg5v" Dec 16 02:06:51.759780 kubelet[3639]: E1216 02:06:51.759648 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dpg5v_kube-system(ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dpg5v_kube-system(ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6e59d6d7bbe50c5840dfd234aab4681c28458a1ed2e852051ddc40607a35ea0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dpg5v" podUID="ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1" Dec 16 02:06:51.805808 containerd[2088]: time="2025-12-16T02:06:51.805749646Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:51.870015 containerd[2088]: time="2025-12-16T02:06:51.869860010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:06:51.870663 
containerd[2088]: time="2025-12-16T02:06:51.870616202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 19.484841567s" Dec 16 02:06:51.870663 containerd[2088]: time="2025-12-16T02:06:51.870648899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 02:06:51.886973 containerd[2088]: time="2025-12-16T02:06:51.886937836Z" level=info msg="CreateContainer within sandbox \"582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 02:06:52.111582 containerd[2088]: time="2025-12-16T02:06:52.111328327Z" level=info msg="Container 3478bfa825cab23fafbff9a383d6315a8eebb1093168fc4adfbbff437163d803: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:06:52.268807 containerd[2088]: time="2025-12-16T02:06:52.268728306Z" level=info msg="CreateContainer within sandbox \"582198c85b43813ea85391dd3f04a8292ccce30f74866bc3ee2e9c9cc8e4c336\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3478bfa825cab23fafbff9a383d6315a8eebb1093168fc4adfbbff437163d803\"" Dec 16 02:06:52.269421 containerd[2088]: time="2025-12-16T02:06:52.269392855Z" level=info msg="StartContainer for \"3478bfa825cab23fafbff9a383d6315a8eebb1093168fc4adfbbff437163d803\"" Dec 16 02:06:52.271487 containerd[2088]: time="2025-12-16T02:06:52.271464667Z" level=info msg="connecting to shim 3478bfa825cab23fafbff9a383d6315a8eebb1093168fc4adfbbff437163d803" address="unix:///run/containerd/s/6f64e85e619e0dbe5c2b798be72f51e0d62683a54ba25ebe7d48a1f7e7b375b3" protocol=ttrpc version=3 Dec 16 02:06:52.287940 systemd[1]: Started cri-containerd-3478bfa825cab23fafbff9a383d6315a8eebb1093168fc4adfbbff437163d803.scope - libcontainer container 3478bfa825cab23fafbff9a383d6315a8eebb1093168fc4adfbbff437163d803. 
Dec 16 02:06:52.318000 audit: BPF prog-id=196 op=LOAD Dec 16 02:06:52.318000 audit[4842]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4119 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:52.338968 kernel: audit: type=1334 audit(1765850812.318:603): prog-id=196 op=LOAD Dec 16 02:06:52.339063 kernel: audit: type=1300 audit(1765850812.318:603): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4119 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:52.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373862666138323563616232336661666266663961333833643633 Dec 16 02:06:52.355478 kernel: audit: type=1327 audit(1765850812.318:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373862666138323563616232336661666266663961333833643633 Dec 16 02:06:52.318000 audit: BPF prog-id=197 op=LOAD Dec 16 02:06:52.360789 kernel: audit: type=1334 audit(1765850812.318:604): prog-id=197 op=LOAD Dec 16 02:06:52.318000 audit[4842]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4119 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:52.377331 kernel: audit: type=1300 audit(1765850812.318:604): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4119 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:52.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373862666138323563616232336661666266663961333833643633 Dec 16 02:06:52.397387 kernel: audit: type=1327 audit(1765850812.318:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373862666138323563616232336661666266663961333833643633 Dec 16 02:06:52.318000 audit: BPF prog-id=197 op=UNLOAD Dec 16 02:06:52.401956 kernel: audit: type=1334 audit(1765850812.318:605): prog-id=197 op=UNLOAD Dec 16 02:06:52.318000 audit[4842]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:52.417207 kernel: audit: type=1300 audit(1765850812.318:605): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=4119 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:52.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373862666138323563616232336661666266663961333833643633 Dec 16 02:06:52.433298 kernel: audit: type=1327 audit(1765850812.318:605): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373862666138323563616232336661666266663961333833643633 Dec 16 02:06:52.318000 audit: BPF prog-id=196 op=UNLOAD Dec 16 02:06:52.439529 kernel: audit: type=1334 audit(1765850812.318:606): prog-id=196 op=UNLOAD Dec 16 02:06:52.318000 audit[4842]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4119 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:52.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373862666138323563616232336661666266663961333833643633 Dec 16 02:06:52.318000 audit: BPF prog-id=198 op=LOAD Dec 16 02:06:52.318000 audit[4842]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4119 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:52.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334373862666138323563616232336661666266663961333833643633 Dec 16 02:06:52.450061 containerd[2088]: time="2025-12-16T02:06:52.449997619Z" level=info msg="StartContainer for \"3478bfa825cab23fafbff9a383d6315a8eebb1093168fc4adfbbff437163d803\" returns successfully" Dec 16 02:06:52.770573 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 02:06:52.770701 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
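The audit records around this container start hex-encode their PROCTITLE field because the recorded command line contains NUL bytes between arguments. Decoded, it is the runc invocation for the container started above (runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/3478bfa8…), cut short at what appears to be the audit subsystem's 128-byte proctitle cap, which is why the container ID is truncated. A sketch of the decoding step, using a shortened copy of the hex value from the records above (the constant name is illustrative):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// Audit PROCTITLE values are hex-encoded when the command line contains
// non-printable bytes (here, the NULs separating argv entries). Decoding the
// value from the records above recovers the runc invocation.
func main() {
	// Truncated copy of the proctitle= value logged above.
	const proctitleHex = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	raw, err := hex.DecodeString(proctitleHex)
	if err != nil {
		panic(err)
	}
	args := strings.Split(string(raw), "\x00") // argv entries are NUL-separated
	fmt.Println(strings.Join(args, " "))       // prints: runc --root /run/containerd/runc/k8s.io
}
```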
Dec 16 02:06:52.944610 kubelet[3639]: I1216 02:06:52.944533 3639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9wb\" (UniqueName: \"kubernetes.io/projected/66b06eca-07d1-44eb-b416-c4ad8a434e78-kube-api-access-2w9wb\") pod \"66b06eca-07d1-44eb-b416-c4ad8a434e78\" (UID: \"66b06eca-07d1-44eb-b416-c4ad8a434e78\") " Dec 16 02:06:52.944610 kubelet[3639]: I1216 02:06:52.944592 3639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66b06eca-07d1-44eb-b416-c4ad8a434e78-whisker-backend-key-pair\") pod \"66b06eca-07d1-44eb-b416-c4ad8a434e78\" (UID: \"66b06eca-07d1-44eb-b416-c4ad8a434e78\") " Dec 16 02:06:52.945935 kubelet[3639]: I1216 02:06:52.945149 3639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66b06eca-07d1-44eb-b416-c4ad8a434e78-whisker-ca-bundle\") pod \"66b06eca-07d1-44eb-b416-c4ad8a434e78\" (UID: \"66b06eca-07d1-44eb-b416-c4ad8a434e78\") " Dec 16 02:06:52.945935 kubelet[3639]: I1216 02:06:52.945625 3639 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b06eca-07d1-44eb-b416-c4ad8a434e78-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "66b06eca-07d1-44eb-b416-c4ad8a434e78" (UID: "66b06eca-07d1-44eb-b416-c4ad8a434e78"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 02:06:52.947897 systemd[1]: var-lib-kubelet-pods-66b06eca\x2d07d1\x2d44eb\x2db416\x2dc4ad8a434e78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2w9wb.mount: Deactivated successfully. Dec 16 02:06:52.951062 kubelet[3639]: I1216 02:06:52.950901 3639 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b06eca-07d1-44eb-b416-c4ad8a434e78-kube-api-access-2w9wb" (OuterVolumeSpecName: "kube-api-access-2w9wb") pod "66b06eca-07d1-44eb-b416-c4ad8a434e78" (UID: "66b06eca-07d1-44eb-b416-c4ad8a434e78"). InnerVolumeSpecName "kube-api-access-2w9wb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 02:06:52.952679 systemd[1]: var-lib-kubelet-pods-66b06eca\x2d07d1\x2d44eb\x2db416\x2dc4ad8a434e78-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 02:06:52.956813 kubelet[3639]: I1216 02:06:52.956765 3639 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b06eca-07d1-44eb-b416-c4ad8a434e78-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "66b06eca-07d1-44eb-b416-c4ad8a434e78" (UID: "66b06eca-07d1-44eb-b416-c4ad8a434e78"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 02:06:53.045952 kubelet[3639]: I1216 02:06:53.045842 3639 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66b06eca-07d1-44eb-b416-c4ad8a434e78-whisker-backend-key-pair\") on node \"ci-4547.0.0-a-de7f477aa9\" DevicePath \"\"" Dec 16 02:06:53.046300 kubelet[3639]: I1216 02:06:53.046285 3639 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66b06eca-07d1-44eb-b416-c4ad8a434e78-whisker-ca-bundle\") on node \"ci-4547.0.0-a-de7f477aa9\" DevicePath \"\"" Dec 16 02:06:53.046400 kubelet[3639]: I1216 02:06:53.046365 3639 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2w9wb\" (UniqueName: \"kubernetes.io/projected/66b06eca-07d1-44eb-b416-c4ad8a434e78-kube-api-access-2w9wb\") on node \"ci-4547.0.0-a-de7f477aa9\" DevicePath \"\"" Dec 16 02:06:53.294070 systemd[1]: Removed slice kubepods-besteffort-pod66b06eca_07d1_44eb_b416_c4ad8a434e78.slice - libcontainer container kubepods-besteffort-pod66b06eca_07d1_44eb_b416_c4ad8a434e78.slice. Dec 16 02:06:53.488389 kubelet[3639]: I1216 02:06:53.488263 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d4x59" podStartSLOduration=2.953820374 podStartE2EDuration="33.488246947s" podCreationTimestamp="2025-12-16 02:06:20 +0000 UTC" firstStartedPulling="2025-12-16 02:06:21.33688254 +0000 UTC m=+24.234536316" lastFinishedPulling="2025-12-16 02:06:51.871309105 +0000 UTC m=+54.768962889" observedRunningTime="2025-12-16 02:06:53.472047788 +0000 UTC m=+56.369701588" watchObservedRunningTime="2025-12-16 02:06:53.488246947 +0000 UTC m=+56.385900723" Dec 16 02:06:53.578572 systemd[1]: Created slice kubepods-besteffort-pod81a4ac18_79c3_4965_a33a_757950d44671.slice - libcontainer container kubepods-besteffort-pod81a4ac18_79c3_4965_a33a_757950d44671.slice. 
Dec 16 02:06:53.649894 kubelet[3639]: I1216 02:06:53.649762 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81a4ac18-79c3-4965-a33a-757950d44671-whisker-backend-key-pair\") pod \"whisker-6f9cf5fb98-qdzq2\" (UID: \"81a4ac18-79c3-4965-a33a-757950d44671\") " pod="calico-system/whisker-6f9cf5fb98-qdzq2" Dec 16 02:06:53.649894 kubelet[3639]: I1216 02:06:53.649816 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a4ac18-79c3-4965-a33a-757950d44671-whisker-ca-bundle\") pod \"whisker-6f9cf5fb98-qdzq2\" (UID: \"81a4ac18-79c3-4965-a33a-757950d44671\") " pod="calico-system/whisker-6f9cf5fb98-qdzq2" Dec 16 02:06:53.649894 kubelet[3639]: I1216 02:06:53.649828 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5xsc\" (UniqueName: \"kubernetes.io/projected/81a4ac18-79c3-4965-a33a-757950d44671-kube-api-access-q5xsc\") pod \"whisker-6f9cf5fb98-qdzq2\" (UID: \"81a4ac18-79c3-4965-a33a-757950d44671\") " pod="calico-system/whisker-6f9cf5fb98-qdzq2" Dec 16 02:06:53.882332 containerd[2088]: time="2025-12-16T02:06:53.882284029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9cf5fb98-qdzq2,Uid:81a4ac18-79c3-4965-a33a-757950d44671,Namespace:calico-system,Attempt:0,}" Dec 16 02:06:54.005971 systemd-networkd[1667]: cali80b2eab4999: Link UP Dec 16 02:06:54.006619 systemd-networkd[1667]: cali80b2eab4999: Gained carrier Dec 16 02:06:54.027771 containerd[2088]: 2025-12-16 02:06:53.910 [INFO][4932] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:06:54.027771 containerd[2088]: 2025-12-16 02:06:53.945 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0 whisker-6f9cf5fb98- calico-system 81a4ac18-79c3-4965-a33a-757950d44671 963 0 2025-12-16 02:06:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f9cf5fb98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-a-de7f477aa9 whisker-6f9cf5fb98-qdzq2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali80b2eab4999 [] [] }} ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Namespace="calico-system" Pod="whisker-6f9cf5fb98-qdzq2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-" Dec 16 02:06:54.027771 containerd[2088]: 2025-12-16 02:06:53.945 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Namespace="calico-system" Pod="whisker-6f9cf5fb98-qdzq2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" Dec 16 02:06:54.027771 containerd[2088]: 2025-12-16 02:06:53.963 [INFO][4944] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" HandleID="k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Workload="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.963 [INFO][4944] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" HandleID="k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Workload="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-de7f477aa9", "pod":"whisker-6f9cf5fb98-qdzq2", "timestamp":"2025-12-16 02:06:53.963077078 +0000 UTC"}, Hostname:"ci-4547.0.0-a-de7f477aa9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.963 [INFO][4944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.963 [INFO][4944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.963 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-de7f477aa9' Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.968 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.972 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.975 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.977 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.027954 containerd[2088]: 2025-12-16 02:06:53.978 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.028085 containerd[2088]: 2025-12-16 02:06:53.978 [INFO][4944] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.028085 containerd[2088]: 2025-12-16 02:06:53.980 [INFO][4944] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e Dec 16 02:06:54.028085 containerd[2088]: 2025-12-16 02:06:53.988 [INFO][4944] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.028085 containerd[2088]: 2025-12-16 02:06:53.996 [INFO][4944] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.1/26] block=192.168.4.0/26 handle="k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.028085 containerd[2088]: 2025-12-16 02:06:53.996 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.1/26] handle="k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:06:54.028085 containerd[2088]: 2025-12-16 02:06:53.997 [INFO][4944] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:06:54.028085 containerd[2088]: 2025-12-16 02:06:53.997 [INFO][4944] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.1/26] IPv6=[] ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" HandleID="k8s-pod-network.99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Workload="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" Dec 16 02:06:54.028203 containerd[2088]: 2025-12-16 02:06:53.999 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Namespace="calico-system" Pod="whisker-6f9cf5fb98-qdzq2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0", GenerateName:"whisker-6f9cf5fb98-", Namespace:"calico-system", SelfLink:"", UID:"81a4ac18-79c3-4965-a33a-757950d44671", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9cf5fb98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"", Pod:"whisker-6f9cf5fb98-qdzq2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali80b2eab4999", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:06:54.028203 containerd[2088]: 2025-12-16 02:06:53.999 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.1/32] ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Namespace="calico-system" Pod="whisker-6f9cf5fb98-qdzq2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" Dec 16 02:06:54.028251 containerd[2088]: 2025-12-16 02:06:53.999 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80b2eab4999 ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Namespace="calico-system" Pod="whisker-6f9cf5fb98-qdzq2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" Dec 16 02:06:54.028251 containerd[2088]: 2025-12-16 02:06:54.006 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Namespace="calico-system" Pod="whisker-6f9cf5fb98-qdzq2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" Dec 16 02:06:54.028281 containerd[2088]: 2025-12-16 02:06:54.007 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Namespace="calico-system" Pod="whisker-6f9cf5fb98-qdzq2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0", GenerateName:"whisker-6f9cf5fb98-", Namespace:"calico-system", SelfLink:"", UID:"81a4ac18-79c3-4965-a33a-757950d44671", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9cf5fb98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e", Pod:"whisker-6f9cf5fb98-qdzq2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali80b2eab4999", MAC:"a6:ea:23:bb:ac:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:06:54.028313 containerd[2088]: 2025-12-16 02:06:54.026 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" Namespace="calico-system" Pod="whisker-6f9cf5fb98-qdzq2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-whisker--6f9cf5fb98--qdzq2-eth0" Dec 16 02:06:54.071512 containerd[2088]: time="2025-12-16T02:06:54.071481160Z" level=info msg="connecting to shim 99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e" address="unix:///run/containerd/s/7809cf9d699a0a50d47b910538bcdd759dd37cb09bb5a6de72b0cc84ef5b8ef2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:06:54.114891 systemd[1]: Started cri-containerd-99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e.scope - libcontainer container 99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e. 
Dec 16 02:06:54.132000 audit: BPF prog-id=199 op=LOAD Dec 16 02:06:54.133000 audit: BPF prog-id=200 op=LOAD Dec 16 02:06:54.133000 audit[4996]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4974 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656464353736373030643135666666336561333634656165383632 Dec 16 02:06:54.134000 audit: BPF prog-id=200 op=UNLOAD Dec 16 02:06:54.134000 audit[4996]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4974 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656464353736373030643135666666336561333634656165383632 Dec 16 02:06:54.134000 audit: BPF prog-id=201 op=LOAD Dec 16 02:06:54.134000 audit[4996]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4974 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656464353736373030643135666666336561333634656165383632 Dec 16 02:06:54.134000 audit: BPF prog-id=202 op=LOAD Dec 16 02:06:54.134000 audit[4996]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4974 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656464353736373030643135666666336561333634656165383632 Dec 16 02:06:54.134000 audit: BPF prog-id=202 op=UNLOAD Dec 16 02:06:54.134000 audit[4996]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4974 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656464353736373030643135666666336561333634656165383632 Dec 16 02:06:54.134000 audit: BPF prog-id=201 op=UNLOAD Dec 16 02:06:54.134000 audit[4996]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4974 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656464353736373030643135666666336561333634656165383632 Dec 16 02:06:54.134000 audit: BPF prog-id=203 op=LOAD Dec 16 02:06:54.134000 audit[4996]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4974 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656464353736373030643135666666336561333634656165383632 Dec 16 02:06:54.170546 containerd[2088]: time="2025-12-16T02:06:54.170410351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9cf5fb98-qdzq2,Uid:81a4ac18-79c3-4965-a33a-757950d44671,Namespace:calico-system,Attempt:0,} returns sandbox id \"99edd576700d15fff3ea364eae8623c3c44097d95d959f47e3d64bfb14f3177e\"" Dec 16 02:06:54.172734 containerd[2088]: time="2025-12-16T02:06:54.172681553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:06:54.348000 audit: BPF prog-id=204 op=LOAD Dec 16 02:06:54.348000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb74ff18 a2=98 a3=ffffeb74ff08 items=0 ppid=5028 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:06:54.350000 audit: BPF prog-id=204 op=UNLOAD Dec 16 02:06:54.350000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffeb74fee8 a3=0 items=0 ppid=5028 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.350000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:06:54.351000 audit: BPF prog-id=205 op=LOAD Dec 16 02:06:54.351000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb74fdc8 a2=74 a3=95 items=0 ppid=5028 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.351000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:06:54.351000 audit: BPF prog-id=205 op=UNLOAD Dec 16 02:06:54.351000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5028 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.351000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:06:54.351000 audit: BPF prog-id=206 op=LOAD Dec 16 02:06:54.351000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb74fdf8 a2=40 a3=ffffeb74fe28 items=0 ppid=5028 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.351000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:06:54.351000 audit: BPF prog-id=206 op=UNLOAD Dec 16 02:06:54.351000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffeb74fe28 items=0 ppid=5028 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.351000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:06:54.354000 audit: BPF prog-id=207 op=LOAD Dec 16 02:06:54.354000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff18486d8 a2=98 a3=fffff18486c8 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.355000 audit: BPF prog-id=207 op=UNLOAD Dec 16 02:06:54.355000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff18486a8 a3=0 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.355000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.356000 audit: BPF prog-id=208 op=LOAD Dec 16 02:06:54.356000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff1848368 a2=74 a3=95 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.356000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.356000 audit: BPF prog-id=208 op=UNLOAD Dec 16 02:06:54.356000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.356000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.356000 audit: BPF prog-id=209 op=LOAD Dec 16 02:06:54.356000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff18483c8 a2=94 a3=2 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.356000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.356000 audit: BPF prog-id=209 op=UNLOAD Dec 16 02:06:54.356000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.356000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.441868 containerd[2088]: time="2025-12-16T02:06:54.440752407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:06:54.461000 audit: BPF prog-id=210 op=LOAD Dec 16 02:06:54.461000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff1848388 a2=40 a3=fffff18483b8 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.461000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.461000 audit: BPF prog-id=210 op=UNLOAD Dec 16 02:06:54.461000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff18483b8 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.461000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=211 op=LOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff1848398 a2=94 a3=4 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=211 op=UNLOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=212 op=LOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff18481d8 a2=94 a3=5 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=212 op=UNLOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=213 op=LOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff1848408 a2=94 a3=6 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=213 op=UNLOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=214 op=LOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff1847bd8 a2=94 a3=83 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=215 op=LOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff1847998 a2=94 a3=2 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.471000 audit: BPF prog-id=215 op=UNLOAD Dec 16 02:06:54.471000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.471000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.472000 audit: BPF prog-id=214 op=UNLOAD Dec 16 02:06:54.472000 audit[5107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=e83d620 a3=e830b00 items=0 ppid=5028 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.472000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:06:54.479000 audit: BPF prog-id=216 op=LOAD Dec 16 02:06:54.479000 audit[5154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed8791f8 a2=98 a3=ffffed8791e8 items=0 ppid=5028 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.479000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:06:54.479000 audit: BPF prog-id=216 op=UNLOAD Dec 16 02:06:54.479000 audit[5154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffed8791c8 a3=0 items=0 ppid=5028 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.479000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:06:54.479000 audit: BPF prog-id=217 op=LOAD Dec 16 02:06:54.479000 audit[5154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed8790a8 a2=74 a3=95 items=0 ppid=5028 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.479000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:06:54.479000 audit: BPF prog-id=217 op=UNLOAD Dec 16 02:06:54.479000 audit[5154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5028 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.479000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:06:54.479000 audit: BPF prog-id=218 op=LOAD Dec 16 02:06:54.479000 audit[5154]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed8790d8 a2=40 a3=ffffed879108 items=0 ppid=5028 pid=5154 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.479000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:06:54.479000 audit: BPF prog-id=218 op=UNLOAD Dec 16 02:06:54.479000 audit[5154]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffed879108 items=0 ppid=5028 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:54.479000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:06:55.256171 containerd[2088]: time="2025-12-16T02:06:55.255997551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:06:55.256171 containerd[2088]: time="2025-12-16T02:06:55.256104018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:06:55.258811 kubelet[3639]: E1216 02:06:55.258756 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:06:55.259051 kubelet[3639]: E1216 02:06:55.258829 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:06:55.263842 kubelet[3639]: E1216 02:06:55.263802 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5fe0b406afc34a4baf35acad091248e2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q5xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9cf5fb98-qdzq2_calico-system(81a4ac18-79c3-4965-a33a-757950d44671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:06:55.265973 containerd[2088]: time="2025-12-16T02:06:55.265948690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:06:55.291575 kubelet[3639]: I1216 02:06:55.291537 3639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b06eca-07d1-44eb-b416-c4ad8a434e78" path="/var/lib/kubelet/pods/66b06eca-07d1-44eb-b416-c4ad8a434e78/volumes" Dec 16 02:06:55.383000 audit: BPF prog-id=219 op=LOAD Dec 16 02:06:55.383000 audit[5183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2878758 a2=98 a3=ffffc2878748 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.383000 audit: BPF prog-id=219 op=UNLOAD Dec 16 02:06:55.383000 audit[5183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc2878728 a3=0 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=220 op=LOAD Dec 16 
02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2878438 a2=74 a3=95 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=220 op=UNLOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=221 op=LOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2878498 a2=94 a3=2 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=221 op=UNLOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=222 op=LOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2878318 a2=40 a3=ffffc2878348 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=222 op=UNLOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc2878348 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=223 op=LOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2878468 a2=94 a3=b7 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=223 op=UNLOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=224 op=LOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2877b18 a2=94 a3=2 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=224 op=UNLOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.384000 audit: BPF prog-id=225 op=LOAD Dec 16 02:06:55.384000 audit[5183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2877ca8 a2=94 a3=30 items=0 ppid=5028 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:06:55.387068 systemd-networkd[1667]: vxlan.calico: Link UP Dec 16 02:06:55.387000 
audit: BPF prog-id=226 op=LOAD Dec 16 02:06:55.387000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffee8d1238 a2=98 a3=ffffee8d1228 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.387000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.387000 audit: BPF prog-id=226 op=UNLOAD Dec 16 02:06:55.387000 audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffee8d1208 a3=0 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.387000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.387071 systemd-networkd[1667]: vxlan.calico: Gained carrier Dec 16 02:06:55.388000 audit: BPF prog-id=227 op=LOAD Dec 16 02:06:55.388000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee8d0ec8 a2=74 a3=95 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.388000 audit: BPF prog-id=227 op=UNLOAD Dec 16 02:06:55.388000 audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.388000 audit: BPF prog-id=228 op=LOAD Dec 16 02:06:55.388000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee8d0f28 a2=94 a3=2 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.388000 audit: BPF prog-id=228 op=UNLOAD Dec 16 02:06:55.388000 audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.388000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.437899 systemd-networkd[1667]: cali80b2eab4999: Gained IPv6LL Dec 16 02:06:55.474000 audit: BPF prog-id=229 op=LOAD Dec 16 02:06:55.474000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee8d0ee8 a2=40 a3=ffffee8d0f18 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.474000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.475000 audit: BPF prog-id=229 op=UNLOAD Dec 16 02:06:55.475000 audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffee8d0f18 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.475000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.481000 audit: BPF prog-id=230 op=LOAD Dec 16 02:06:55.481000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee8d0ef8 a2=94 a3=4 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.481000 audit: BPF prog-id=230 op=UNLOAD Dec 16 02:06:55.481000 audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.481000 audit: BPF prog-id=231 op=LOAD Dec 16 02:06:55.481000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffee8d0d38 a2=94 a3=5 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.481000 audit: BPF prog-id=231 op=UNLOAD Dec 16 02:06:55.481000 audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5028 pid=5186 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.482000 audit: BPF prog-id=232 op=LOAD Dec 16 02:06:55.482000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee8d0f68 a2=94 a3=6 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.482000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.482000 audit: BPF prog-id=232 op=UNLOAD Dec 16 02:06:55.482000 audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.482000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.482000 audit: BPF prog-id=233 op=LOAD Dec 16 02:06:55.482000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee8d0738 a2=94 a3=83 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.482000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.483000 audit: BPF prog-id=234 op=LOAD Dec 16 02:06:55.483000 audit[5186]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffee8d04f8 a2=94 a3=2 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.483000 audit: BPF prog-id=234 op=UNLOAD Dec 16 02:06:55.483000 audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.483000 audit: BPF prog-id=233 op=UNLOAD Dec 16 02:06:55.483000 
audit[5186]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=82fd620 a3=82f0b00 items=0 ppid=5028 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:06:55.495000 audit: BPF prog-id=225 op=UNLOAD Dec 16 02:06:55.495000 audit[5028]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40007b1180 a2=0 a3=0 items=0 ppid=4976 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:55.495000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 02:06:55.692320 containerd[2088]: time="2025-12-16T02:06:55.691235941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:06:56.111000 audit[5213]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5213 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:06:56.112505 containerd[2088]: time="2025-12-16T02:06:56.112180163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:06:56.112505 containerd[2088]: time="2025-12-16T02:06:56.112250181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:06:56.112745 kubelet[3639]: E1216 02:06:56.112694 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:06:56.112745 kubelet[3639]: E1216 02:06:56.112743 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:06:56.112889 kubelet[3639]: E1216 02:06:56.112856 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9cf5fb98-qdzq2_calico-system(81a4ac18-79c3-4965-a33a-757950d44671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:06:56.111000 audit[5213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe3f974f0 a2=0 a3=ffffb4445fa8 items=0 ppid=5028 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:56.111000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:06:56.114189 kubelet[3639]: E1216 02:06:56.114135 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:06:56.114000 audit[5215]: NETFILTER_CFG table=mangle:123 family=2 entries=16 
op=nft_register_chain pid=5215 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:06:56.114000 audit[5215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff3db32e0 a2=0 a3=ffff845b3fa8 items=0 ppid=5028 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:56.114000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:06:56.121000 audit[5217]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:06:56.121000 audit[5217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd9cd2220 a2=0 a3=ffffa92d5fa8 items=0 ppid=5028 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:56.121000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:06:56.176000 audit[5214]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:06:56.176000 audit[5214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffd853cc00 a2=0 a3=ffffb7707fa8 items=0 ppid=5028 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:56.176000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:06:56.452282 kubelet[3639]: E1216 02:06:56.452154 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:06:56.486000 audit[5227]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:56.486000 audit[5227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffce650c60 a2=0 a3=1 items=0 ppid=3793 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:56.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:56.499000 audit[5227]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:06:56.499000 audit[5227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffce650c60 a2=0 a3=1 items=0 ppid=3793 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:06:56.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:06:56.717991 systemd-networkd[1667]: vxlan.calico: Gained IPv6LL Dec 16 02:07:03.290573 containerd[2088]: time="2025-12-16T02:07:03.290522460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s52v5,Uid:f469a490-fb18-4868-b583-cd075b9a892c,Namespace:calico-system,Attempt:0,}" Dec 16 02:07:03.291232 containerd[2088]: time="2025-12-16T02:07:03.290989763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-b42tq,Uid:5f7514dc-37b5-4bac-8d4e-04c87fb3f679,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:07:03.518134 systemd-networkd[1667]: caliac04b7b9496: Link UP Dec 16 02:07:03.520477 systemd-networkd[1667]: caliac04b7b9496: Gained carrier Dec 16 02:07:03.536044 containerd[2088]: 2025-12-16 02:07:03.439 [INFO][5239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0 goldmane-666569f655- calico-system f469a490-fb18-4868-b583-cd075b9a892c 862 0 2025-12-16 02:06:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-a-de7f477aa9 goldmane-666569f655-s52v5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliac04b7b9496 [] [] }} ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Namespace="calico-system" Pod="goldmane-666569f655-s52v5" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-" Dec 16 02:07:03.536044 containerd[2088]: 2025-12-16 02:07:03.440 [INFO][5239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Namespace="calico-system" Pod="goldmane-666569f655-s52v5" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" Dec 16 02:07:03.536044 containerd[2088]: 2025-12-16 02:07:03.460 [INFO][5254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" HandleID="k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Workload="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.462 [INFO][5254] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" 
HandleID="k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Workload="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-de7f477aa9", "pod":"goldmane-666569f655-s52v5", "timestamp":"2025-12-16 02:07:03.460305252 +0000 UTC"}, Hostname:"ci-4547.0.0-a-de7f477aa9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.463 [INFO][5254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.463 [INFO][5254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.463 [INFO][5254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-de7f477aa9' Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.469 [INFO][5254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.476 [INFO][5254] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.481 [INFO][5254] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.488 [INFO][5254] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536282 containerd[2088]: 2025-12-16 02:07:03.490 [INFO][5254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536426 containerd[2088]: 2025-12-16 02:07:03.490 [INFO][5254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536426 containerd[2088]: 2025-12-16 02:07:03.492 [INFO][5254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc Dec 16 02:07:03.536426 containerd[2088]: 2025-12-16 02:07:03.502 [INFO][5254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536426 containerd[2088]: 2025-12-16 02:07:03.510 [INFO][5254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.2/26] block=192.168.4.0/26 handle="k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536426 containerd[2088]: 2025-12-16 02:07:03.511 [INFO][5254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.2/26] handle="k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.536426 containerd[2088]: 2025-12-16 02:07:03.511 [INFO][5254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:07:03.536426 containerd[2088]: 2025-12-16 02:07:03.511 [INFO][5254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.2/26] IPv6=[] ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" HandleID="k8s-pod-network.7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Workload="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" Dec 16 02:07:03.536570 containerd[2088]: 2025-12-16 02:07:03.512 [INFO][5239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Namespace="calico-system" Pod="goldmane-666569f655-s52v5" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f469a490-fb18-4868-b583-cd075b9a892c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"", Pod:"goldmane-666569f655-s52v5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac04b7b9496", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:03.536570 containerd[2088]: 2025-12-16 02:07:03.513 [INFO][5239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.2/32] ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Namespace="calico-system" Pod="goldmane-666569f655-s52v5" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" Dec 16 02:07:03.536619 containerd[2088]: 2025-12-16 02:07:03.513 [INFO][5239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac04b7b9496 ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Namespace="calico-system" Pod="goldmane-666569f655-s52v5" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" Dec 16 02:07:03.536619 containerd[2088]: 2025-12-16 02:07:03.520 [INFO][5239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Namespace="calico-system" Pod="goldmane-666569f655-s52v5" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" Dec 16 02:07:03.536650 containerd[2088]: 2025-12-16 02:07:03.521 [INFO][5239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" 
Namespace="calico-system" Pod="goldmane-666569f655-s52v5" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f469a490-fb18-4868-b583-cd075b9a892c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc", Pod:"goldmane-666569f655-s52v5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac04b7b9496", MAC:"4a:20:cf:c9:53:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:03.536682 containerd[2088]: 2025-12-16 02:07:03.532 [INFO][5239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" Namespace="calico-system" Pod="goldmane-666569f655-s52v5" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-goldmane--666569f655--s52v5-eth0" Dec 16 02:07:03.548000 audit[5290]: NETFILTER_CFG table=filter:128 family=2 entries=44 op=nft_register_chain pid=5290 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:03.553189 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 16 02:07:03.548000 audit[5290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffcff58680 a2=0 a3=ffff93b07fa8 items=0 ppid=5028 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:03.579765 kernel: audit: type=1325 audit(1765850823.548:684): table=filter:128 family=2 entries=44 op=nft_register_chain pid=5290 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:03.579856 kernel: audit: type=1300 audit(1765850823.548:684): arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffcff58680 a2=0 a3=ffff93b07fa8 items=0 ppid=5028 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:03.548000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:03.590780 kernel: audit: type=1327 audit(1765850823.548:684): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:03.625612 systemd-networkd[1667]: cali5bef47d04e7: Link UP Dec 16 02:07:03.626146 systemd-networkd[1667]: cali5bef47d04e7: Gained carrier Dec 16 02:07:03.657799 containerd[2088]: 2025-12-16 02:07:03.492 [INFO][5261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0 calico-apiserver-69677bd74f- calico-apiserver 5f7514dc-37b5-4bac-8d4e-04c87fb3f679 866 0 2025-12-16 02:06:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69677bd74f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-de7f477aa9 calico-apiserver-69677bd74f-b42tq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5bef47d04e7 [] [] }} ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-b42tq" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-" Dec 16 02:07:03.657799 containerd[2088]: 2025-12-16 02:07:03.492 [INFO][5261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-b42tq" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" Dec 16 02:07:03.657799 containerd[2088]: 2025-12-16 02:07:03.530 [INFO][5274] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" HandleID="k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.530 [INFO][5274] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" HandleID="k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-de7f477aa9", "pod":"calico-apiserver-69677bd74f-b42tq", "timestamp":"2025-12-16 02:07:03.530735258 +0000 UTC"}, Hostname:"ci-4547.0.0-a-de7f477aa9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.530 [INFO][5274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.531 [INFO][5274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.531 [INFO][5274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-de7f477aa9' Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.591 [INFO][5274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.596 [INFO][5274] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.599 [INFO][5274] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.601 [INFO][5274] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658208 containerd[2088]: 2025-12-16 02:07:03.604 [INFO][5274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658362 containerd[2088]: 2025-12-16 02:07:03.604 [INFO][5274] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658362 containerd[2088]: 2025-12-16 02:07:03.605 [INFO][5274] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110 Dec 16 02:07:03.658362 containerd[2088]: 2025-12-16 02:07:03.610 [INFO][5274] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658362 containerd[2088]: 2025-12-16 02:07:03.620 [INFO][5274] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.3/26] block=192.168.4.0/26 handle="k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658362 containerd[2088]: 2025-12-16 02:07:03.620 [INFO][5274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.3/26] handle="k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:03.658362 containerd[2088]: 2025-12-16 02:07:03.620 [INFO][5274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:07:03.658362 containerd[2088]: 2025-12-16 02:07:03.621 [INFO][5274] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.3/26] IPv6=[] ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" HandleID="k8s-pod-network.e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" Dec 16 02:07:03.658448 containerd[2088]: 2025-12-16 02:07:03.623 [INFO][5261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-b42tq" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0", GenerateName:"calico-apiserver-69677bd74f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f7514dc-37b5-4bac-8d4e-04c87fb3f679", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69677bd74f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"", Pod:"calico-apiserver-69677bd74f-b42tq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bef47d04e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:03.658480 containerd[2088]: 2025-12-16 02:07:03.623 [INFO][5261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.3/32] ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-b42tq" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" Dec 16 02:07:03.658480 containerd[2088]: 2025-12-16 02:07:03.623 [INFO][5261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bef47d04e7 ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-b42tq" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" Dec 16 02:07:03.658480 containerd[2088]: 2025-12-16 02:07:03.627 [INFO][5261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-b42tq" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" Dec 16 02:07:03.658525 containerd[2088]: 2025-12-16 02:07:03.628 [INFO][5261] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-b42tq" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0", GenerateName:"calico-apiserver-69677bd74f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f7514dc-37b5-4bac-8d4e-04c87fb3f679", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69677bd74f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110", Pod:"calico-apiserver-69677bd74f-b42tq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bef47d04e7", MAC:"9e:2b:94:3a:77:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:03.658555 containerd[2088]: 2025-12-16 02:07:03.655 [INFO][5261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-b42tq" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--b42tq-eth0" Dec 16 02:07:03.663000 audit[5299]: NETFILTER_CFG table=filter:129 family=2 entries=54 op=nft_register_chain pid=5299 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:03.663000 audit[5299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffdb6e9df0 a2=0 a3=ffff9fef5fa8 items=0 ppid=5028 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:03.691489 kernel: audit: type=1325 audit(1765850823.663:685): table=filter:129 family=2 entries=54 op=nft_register_chain pid=5299 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:03.691578 kernel: audit: type=1300 audit(1765850823.663:685): arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffdb6e9df0 a2=0 a3=ffff9fef5fa8 items=0 ppid=5028 pid=5299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:03.663000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:03.703063 kernel: audit: type=1327 audit(1765850823.663:685): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:04.022259 containerd[2088]: time="2025-12-16T02:07:04.022224992Z" level=info msg="connecting to shim 7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc" address="unix:///run/containerd/s/f09464a0dde0e1f61c6c4142b1785ad31629013a9efad6b4d4ff3a164d59ada1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:07:04.045960 systemd[1]: Started cri-containerd-7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc.scope - libcontainer container 7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc. Dec 16 02:07:04.053000 audit: BPF prog-id=235 op=LOAD Dec 16 02:07:04.058000 audit: BPF prog-id=236 op=LOAD Dec 16 02:07:04.063048 kernel: audit: type=1334 audit(1765850824.053:686): prog-id=235 op=LOAD Dec 16 02:07:04.063106 kernel: audit: type=1334 audit(1765850824.058:687): prog-id=236 op=LOAD Dec 16 02:07:04.058000 audit[5318]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5307 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.079655 kernel: audit: type=1300 audit(1765850824.058:687): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5307 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383632643338306438333832626131333061653637343231363862 Dec 16 02:07:04.095668 kernel: audit: type=1327 audit(1765850824.058:687): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383632643338306438333832626131333061653637343231363862 Dec 16 02:07:04.058000 audit: BPF prog-id=236 op=UNLOAD Dec 16 02:07:04.058000 audit[5318]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5307 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383632643338306438333832626131333061653637343231363862 Dec 16 02:07:04.058000 audit: BPF prog-id=237 op=LOAD Dec 16 02:07:04.058000 audit[5318]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5307 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383632643338306438333832626131333061653637343231363862 Dec 16 02:07:04.062000 audit: BPF prog-id=238 op=LOAD Dec 16 02:07:04.062000 audit[5318]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5307 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383632643338306438333832626131333061653637343231363862 Dec 16 02:07:04.079000 audit: BPF prog-id=238 op=UNLOAD Dec 16 02:07:04.079000 audit[5318]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5307 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383632643338306438333832626131333061653637343231363862 Dec 16 02:07:04.079000 audit: BPF prog-id=237 op=UNLOAD Dec 16 02:07:04.079000 audit[5318]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5307 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383632643338306438333832626131333061653637343231363862 Dec 16 02:07:04.079000 audit: BPF prog-id=239 op=LOAD Dec 16 02:07:04.079000 audit[5318]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5307 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383632643338306438333832626131333061653637343231363862 Dec 16 02:07:04.230041 containerd[2088]: time="2025-12-16T02:07:04.229971633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s52v5,Uid:f469a490-fb18-4868-b583-cd075b9a892c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e862d380d8382ba130ae6742168bab2522c83275856118818899203241813cc\"" Dec 16 02:07:04.231323 containerd[2088]: time="2025-12-16T02:07:04.231253515Z" level=info msg="connecting to shim 
e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110" address="unix:///run/containerd/s/efebf3768a04072e5aadd3d05d37751625518c60ce62393c329d23287b8ab7af" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:07:04.233084 containerd[2088]: time="2025-12-16T02:07:04.233025549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:07:04.251941 systemd[1]: Started cri-containerd-e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110.scope - libcontainer container e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110. Dec 16 02:07:04.259000 audit: BPF prog-id=240 op=LOAD Dec 16 02:07:04.259000 audit: BPF prog-id=241 op=LOAD Dec 16 02:07:04.259000 audit[5364]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017a180 a2=98 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613234613236663430373132613139353365333965393037633931 Dec 16 02:07:04.259000 audit: BPF prog-id=241 op=UNLOAD Dec 16 02:07:04.259000 audit[5364]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613234613236663430373132613139353365333965393037633931 Dec 16 02:07:04.259000 audit: BPF prog-id=242 op=LOAD Dec 16 02:07:04.259000 audit[5364]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017a3e8 a2=98 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613234613236663430373132613139353365333965393037633931 Dec 16 02:07:04.259000 audit: BPF prog-id=243 op=LOAD Dec 16 02:07:04.259000 audit[5364]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017a168 a2=98 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613234613236663430373132613139353365333965393037633931 Dec 16 02:07:04.259000 audit: BPF prog-id=243 op=UNLOAD Dec 16 02:07:04.259000 audit[5364]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5352 pid=5364 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613234613236663430373132613139353365333965393037633931 Dec 16 02:07:04.259000 audit: BPF prog-id=242 op=UNLOAD Dec 16 02:07:04.259000 audit[5364]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613234613236663430373132613139353365333965393037633931 Dec 16 02:07:04.259000 audit: BPF prog-id=244 op=LOAD Dec 16 02:07:04.259000 audit[5364]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017a648 a2=98 a3=0 items=0 ppid=5352 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613234613236663430373132613139353365333965393037633931 Dec 16 02:07:04.313810 containerd[2088]: time="2025-12-16T02:07:04.289560656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmzsh,Uid:8d6bb703-5160-48e3-8477-a1bbde860409,Namespace:calico-system,Attempt:0,}" Dec 16 02:07:04.318301 containerd[2088]: time="2025-12-16T02:07:04.318266948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-b42tq,Uid:5f7514dc-37b5-4bac-8d4e-04c87fb3f679,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e1a24a26f40712a1953e39e907c9122ded7e7029bc12b8ad43c515ff3b3ca110\"" Dec 16 02:07:04.524669 containerd[2088]: time="2025-12-16T02:07:04.524624944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:04.535836 systemd-networkd[1667]: cali84530cf87b7: Link UP Dec 16 02:07:04.537104 systemd-networkd[1667]: cali84530cf87b7: Gained carrier Dec 16 02:07:04.557819 containerd[2088]: 2025-12-16 02:07:04.441 [INFO][5390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0 csi-node-driver- calico-system 8d6bb703-5160-48e3-8477-a1bbde860409 748 0 2025-12-16 02:06:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-a-de7f477aa9 csi-node-driver-rmzsh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali84530cf87b7 [] [] }} 
ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Namespace="calico-system" Pod="csi-node-driver-rmzsh" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-" Dec 16 02:07:04.557819 containerd[2088]: 2025-12-16 02:07:04.441 [INFO][5390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Namespace="calico-system" Pod="csi-node-driver-rmzsh" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" Dec 16 02:07:04.557819 containerd[2088]: 2025-12-16 02:07:04.484 [INFO][5403] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" HandleID="k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Workload="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.485 [INFO][5403] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" HandleID="k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Workload="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-de7f477aa9", "pod":"csi-node-driver-rmzsh", "timestamp":"2025-12-16 02:07:04.484985497 +0000 UTC"}, Hostname:"ci-4547.0.0-a-de7f477aa9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.485 [INFO][5403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.485 [INFO][5403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.485 [INFO][5403] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-de7f477aa9' Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.494 [INFO][5403] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.498 [INFO][5403] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.502 [INFO][5403] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.505 [INFO][5403] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.557988 containerd[2088]: 2025-12-16 02:07:04.507 [INFO][5403] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.558177 containerd[2088]: 2025-12-16 02:07:04.507 [INFO][5403] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.558177 containerd[2088]: 2025-12-16 02:07:04.508 [INFO][5403] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283 Dec 16 02:07:04.558177 containerd[2088]: 2025-12-16 02:07:04.514 [INFO][5403] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.558177 containerd[2088]: 2025-12-16 02:07:04.531 [INFO][5403] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.4/26] block=192.168.4.0/26 handle="k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.558177 containerd[2088]: 2025-12-16 02:07:04.531 [INFO][5403] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.4/26] handle="k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:04.558177 containerd[2088]: 2025-12-16 02:07:04.531 [INFO][5403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:07:04.558177 containerd[2088]: 2025-12-16 02:07:04.531 [INFO][5403] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.4/26] IPv6=[] ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" HandleID="k8s-pod-network.ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Workload="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" Dec 16 02:07:04.558297 containerd[2088]: 2025-12-16 02:07:04.533 [INFO][5390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Namespace="calico-system" Pod="csi-node-driver-rmzsh" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d6bb703-5160-48e3-8477-a1bbde860409", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"", Pod:"csi-node-driver-rmzsh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84530cf87b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:04.558367 containerd[2088]: 2025-12-16 02:07:04.533 [INFO][5390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.4/32] ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Namespace="calico-system" Pod="csi-node-driver-rmzsh" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" Dec 16 02:07:04.558367 containerd[2088]: 2025-12-16 02:07:04.533 [INFO][5390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84530cf87b7 ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Namespace="calico-system" Pod="csi-node-driver-rmzsh" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" Dec 16 02:07:04.558367 containerd[2088]: 2025-12-16 02:07:04.537 [INFO][5390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Namespace="calico-system" Pod="csi-node-driver-rmzsh" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" Dec 16 02:07:04.558480 containerd[2088]: 2025-12-16 02:07:04.538 [INFO][5390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Namespace="calico-system" Pod="csi-node-driver-rmzsh" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d6bb703-5160-48e3-8477-a1bbde860409", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283", Pod:"csi-node-driver-rmzsh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84530cf87b7", MAC:"ee:28:f2:b1:17:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:04.558540 containerd[2088]: 2025-12-16 02:07:04.555 [INFO][5390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" Namespace="calico-system" Pod="csi-node-driver-rmzsh" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-csi--node--driver--rmzsh-eth0" Dec 16 02:07:04.568000 audit[5417]: NETFILTER_CFG table=filter:130 family=2 entries=50 op=nft_register_chain pid=5417 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:04.568000 audit[5417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24804 a0=3 a1=ffffc8d0dfa0 a2=0 a3=ffffbe357fa8 items=0 ppid=5028 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:04.568000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:04.782059 systemd-networkd[1667]: cali5bef47d04e7: Gained IPv6LL Dec 16 02:07:05.290707 containerd[2088]: time="2025-12-16T02:07:05.290649591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vsht2,Uid:7c04891e-a2dd-497f-9457-bad48f5fedbe,Namespace:kube-system,Attempt:0,}" Dec 16 02:07:05.290899 containerd[2088]: time="2025-12-16T02:07:05.290743490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-gv2ff,Uid:e320a820-257c-48b1-85de-c1dd7b465c9a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:07:05.293895 systemd-networkd[1667]: caliac04b7b9496: Gained IPv6LL Dec 16 
02:07:05.741947 systemd-networkd[1667]: cali84530cf87b7: Gained IPv6LL Dec 16 02:07:06.289870 containerd[2088]: time="2025-12-16T02:07:06.289831281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dpg5v,Uid:ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1,Namespace:kube-system,Attempt:0,}" Dec 16 02:07:06.290256 containerd[2088]: time="2025-12-16T02:07:06.289915515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549bfc7bd9-vst6l,Uid:222ba326-59d2-4676-b30c-82b655f93a5f,Namespace:calico-system,Attempt:0,}" Dec 16 02:07:06.518246 containerd[2088]: time="2025-12-16T02:07:06.518123004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:07:06.518246 containerd[2088]: time="2025-12-16T02:07:06.518205287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:06.518390 kubelet[3639]: E1216 02:07:06.518326 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:07:06.518390 kubelet[3639]: E1216 02:07:06.518368 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:07:06.518777 kubelet[3639]: E1216 02:07:06.518554 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m25gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s52v5_calico-system(f469a490-fb18-4868-b583-cd075b9a892c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:06.519570 containerd[2088]: time="2025-12-16T02:07:06.519105620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:07:06.520454 kubelet[3639]: E1216 02:07:06.520367 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:07:07.480612 kubelet[3639]: E1216 02:07:07.480526 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:07:07.506000 audit[5425]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:07.506000 audit[5425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe6c91b50 a2=0 a3=1 items=0 ppid=3793 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:07.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:07.510000 audit[5425]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:07.510000 audit[5425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 
a1=ffffe6c91b50 a2=0 a3=1 items=0 ppid=3793 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:07.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:08.596256 containerd[2088]: time="2025-12-16T02:07:08.596200970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:08.801587 systemd-networkd[1667]: cali44b50dc30f7: Link UP Dec 16 02:07:08.802526 systemd-networkd[1667]: cali44b50dc30f7: Gained carrier Dec 16 02:07:08.829071 containerd[2088]: 2025-12-16 02:07:08.740 [INFO][5429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0 calico-apiserver-69677bd74f- calico-apiserver e320a820-257c-48b1-85de-c1dd7b465c9a 864 0 2025-12-16 02:06:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69677bd74f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-de7f477aa9 calico-apiserver-69677bd74f-gv2ff eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali44b50dc30f7 [] [] }} ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-gv2ff" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-" Dec 16 02:07:08.829071 containerd[2088]: 2025-12-16 02:07:08.740 [INFO][5429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-gv2ff" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" Dec 16 02:07:08.829071 containerd[2088]: 2025-12-16 02:07:08.757 [INFO][5443] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" HandleID="k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.757 [INFO][5443] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" HandleID="k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-de7f477aa9", "pod":"calico-apiserver-69677bd74f-gv2ff", "timestamp":"2025-12-16 02:07:08.757820879 +0000 UTC"}, Hostname:"ci-4547.0.0-a-de7f477aa9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.757 [INFO][5443] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.758 [INFO][5443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.758 [INFO][5443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-de7f477aa9' Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.764 [INFO][5443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.768 [INFO][5443] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.772 [INFO][5443] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.773 [INFO][5443] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829251 containerd[2088]: 2025-12-16 02:07:08.775 [INFO][5443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829733 containerd[2088]: 2025-12-16 02:07:08.775 [INFO][5443] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829733 containerd[2088]: 2025-12-16 02:07:08.777 [INFO][5443] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e Dec 16 02:07:08.829733 containerd[2088]: 2025-12-16 02:07:08.784 [INFO][5443] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829733 containerd[2088]: 2025-12-16 02:07:08.796 [INFO][5443] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.5/26] block=192.168.4.0/26 handle="k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829733 containerd[2088]: 2025-12-16 02:07:08.796 [INFO][5443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.5/26] handle="k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.829733 containerd[2088]: 2025-12-16 02:07:08.796 [INFO][5443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:07:08.829733 containerd[2088]: 2025-12-16 02:07:08.796 [INFO][5443] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.5/26] IPv6=[] ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" HandleID="k8s-pod-network.72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" Dec 16 02:07:08.829862 containerd[2088]: 2025-12-16 02:07:08.798 [INFO][5429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-gv2ff" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0", GenerateName:"calico-apiserver-69677bd74f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e320a820-257c-48b1-85de-c1dd7b465c9a", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69677bd74f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"", Pod:"calico-apiserver-69677bd74f-gv2ff", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44b50dc30f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:08.829903 containerd[2088]: 2025-12-16 02:07:08.798 [INFO][5429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.5/32] ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-gv2ff" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" Dec 16 02:07:08.829903 containerd[2088]: 2025-12-16 02:07:08.798 [INFO][5429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44b50dc30f7 ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-gv2ff" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" Dec 16 02:07:08.829903 containerd[2088]: 2025-12-16 02:07:08.803 [INFO][5429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-gv2ff" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" Dec 16 02:07:08.829951 containerd[2088]: 2025-12-16 02:07:08.803 [INFO][5429] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-gv2ff" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0", GenerateName:"calico-apiserver-69677bd74f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e320a820-257c-48b1-85de-c1dd7b465c9a", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69677bd74f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e", Pod:"calico-apiserver-69677bd74f-gv2ff", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44b50dc30f7", MAC:"7a:43:46:86:0e:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:08.829984 containerd[2088]: 2025-12-16 02:07:08.823 [INFO][5429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" Namespace="calico-apiserver" Pod="calico-apiserver-69677bd74f-gv2ff" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--apiserver--69677bd74f--gv2ff-eth0" Dec 16 02:07:08.840000 audit[5472]: NETFILTER_CFG table=filter:133 family=2 entries=45 op=nft_register_chain pid=5472 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:08.844726 kernel: kauditd_printk_skb: 49 callbacks suppressed Dec 16 02:07:08.844780 kernel: audit: type=1325 audit(1765850828.840:705): table=filter:133 family=2 entries=45 op=nft_register_chain pid=5472 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:08.840000 audit[5472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24248 a0=3 a1=ffffea1c4440 a2=0 a3=ffffa0526fa8 items=0 ppid=5028 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:08.875390 kernel: audit: type=1300 audit(1765850828.840:705): arch=c00000b7 syscall=211 success=yes exit=24248 a0=3 a1=ffffea1c4440 a2=0 a3=ffffa0526fa8 items=0 ppid=5028 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:08.876983 
containerd[2088]: time="2025-12-16T02:07:08.876931535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:07:08.840000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:08.877352 containerd[2088]: time="2025-12-16T02:07:08.877317332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:08.877804 kubelet[3639]: E1216 02:07:08.877577 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:08.877804 kubelet[3639]: E1216 02:07:08.877617 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:08.879055 kubelet[3639]: E1216 02:07:08.878260 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7pm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69677bd74f-b42tq_calico-apiserver(5f7514dc-37b5-4bac-8d4e-04c87fb3f679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:08.879428 kubelet[3639]: E1216 02:07:08.879364 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:07:08.880911 containerd[2088]: time="2025-12-16T02:07:08.880867567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:07:08.889877 kernel: audit: type=1327 audit(1765850828.840:705): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:08.948997 systemd-networkd[1667]: calic4facdea175: Link UP Dec 16 02:07:08.950270 systemd-networkd[1667]: calic4facdea175: Gained carrier Dec 16 02:07:08.972892 containerd[2088]: 2025-12-16 02:07:08.839 [INFO][5453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0 coredns-674b8bbfcf- kube-system 7c04891e-a2dd-497f-9457-bad48f5fedbe 860 0 2025-12-16 02:06:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-de7f477aa9 coredns-674b8bbfcf-vsht2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic4facdea175 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Namespace="kube-system" Pod="coredns-674b8bbfcf-vsht2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-" Dec 16 02:07:08.972892 containerd[2088]: 2025-12-16 02:07:08.839 [INFO][5453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Namespace="kube-system" Pod="coredns-674b8bbfcf-vsht2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" Dec 16 02:07:08.972892 containerd[2088]: 
2025-12-16 02:07:08.905 [INFO][5474] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" HandleID="k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Workload="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.905 [INFO][5474] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" HandleID="k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Workload="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-de7f477aa9", "pod":"coredns-674b8bbfcf-vsht2", "timestamp":"2025-12-16 02:07:08.905411229 +0000 UTC"}, Hostname:"ci-4547.0.0-a-de7f477aa9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.905 [INFO][5474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.905 [INFO][5474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.905 [INFO][5474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-de7f477aa9' Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.911 [INFO][5474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.914 [INFO][5474] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.918 [INFO][5474] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.920 [INFO][5474] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973044 containerd[2088]: 2025-12-16 02:07:08.923 [INFO][5474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973211 containerd[2088]: 2025-12-16 02:07:08.923 [INFO][5474] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973211 containerd[2088]: 2025-12-16 02:07:08.925 [INFO][5474] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83 Dec 16 02:07:08.973211 containerd[2088]: 2025-12-16 02:07:08.930 [INFO][5474] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973211 containerd[2088]: 2025-12-16 02:07:08.940 [INFO][5474] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.6/26] block=192.168.4.0/26 
handle="k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973211 containerd[2088]: 2025-12-16 02:07:08.940 [INFO][5474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.6/26] handle="k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:08.973211 containerd[2088]: 2025-12-16 02:07:08.940 [INFO][5474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:07:08.973211 containerd[2088]: 2025-12-16 02:07:08.940 [INFO][5474] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.6/26] IPv6=[] ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" HandleID="k8s-pod-network.62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Workload="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" Dec 16 02:07:08.973310 containerd[2088]: 2025-12-16 02:07:08.944 [INFO][5453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Namespace="kube-system" Pod="coredns-674b8bbfcf-vsht2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c04891e-a2dd-497f-9457-bad48f5fedbe", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"", Pod:"coredns-674b8bbfcf-vsht2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4facdea175", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:08.973310 containerd[2088]: 2025-12-16 02:07:08.944 [INFO][5453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.6/32] ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Namespace="kube-system" Pod="coredns-674b8bbfcf-vsht2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" Dec 16 02:07:08.973310 containerd[2088]: 2025-12-16 02:07:08.944 [INFO][5453] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to calic4facdea175 ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Namespace="kube-system" Pod="coredns-674b8bbfcf-vsht2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" Dec 16 02:07:08.973310 containerd[2088]: 2025-12-16 02:07:08.950 [INFO][5453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Namespace="kube-system" Pod="coredns-674b8bbfcf-vsht2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" Dec 16 02:07:08.973310 containerd[2088]: 2025-12-16 02:07:08.951 [INFO][5453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Namespace="kube-system" Pod="coredns-674b8bbfcf-vsht2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7c04891e-a2dd-497f-9457-bad48f5fedbe", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83", Pod:"coredns-674b8bbfcf-vsht2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4facdea175", MAC:"5a:c9:9f:14:38:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:08.973310 containerd[2088]: 2025-12-16 02:07:08.967 [INFO][5453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" Namespace="kube-system" Pod="coredns-674b8bbfcf-vsht2" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--vsht2-eth0" Dec 16 02:07:09.000000 audit[5520]: NETFILTER_CFG table=filter:134 family=2 entries=54 op=nft_register_chain pid=5520 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:09.000000 audit[5520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26100 a0=3 a1=ffffed8dea30 a2=0 a3=ffff99ad6fa8 items=0 ppid=5028 pid=5520 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.032257 kernel: audit: type=1325 audit(1765850829.000:706): table=filter:134 family=2 entries=54 op=nft_register_chain pid=5520 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:09.032359 kernel: audit: type=1300 audit(1765850829.000:706): arch=c00000b7 syscall=211 success=yes exit=26100 a0=3 a1=ffffed8dea30 a2=0 a3=ffff99ad6fa8 items=0 ppid=5028 pid=5520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.000000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:09.044216 kernel: audit: type=1327 audit(1765850829.000:706): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:09.077859 containerd[2088]: time="2025-12-16T02:07:09.077822753Z" level=info msg="connecting to shim ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283" address="unix:///run/containerd/s/07128d96a5e5e6fdb69fde28e73db530f51cd4506fab04b4d6dd64e1b7335703" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:07:09.104956 systemd[1]: Started cri-containerd-ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283.scope - libcontainer container ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283. Dec 16 02:07:09.121162 systemd-networkd[1667]: calif32accc6d0c: Link UP Dec 16 02:07:09.120000 audit: BPF prog-id=245 op=LOAD Dec 16 02:07:09.122883 systemd-networkd[1667]: calif32accc6d0c: Gained carrier Dec 16 02:07:09.125000 audit: BPF prog-id=246 op=LOAD Dec 16 02:07:09.134585 kernel: audit: type=1334 audit(1765850829.120:707): prog-id=245 op=LOAD Dec 16 02:07:09.134674 kernel: audit: type=1334 audit(1765850829.125:708): prog-id=246 op=LOAD Dec 16 02:07:09.134709 kernel: audit: type=1300 audit(1765850829.125:708): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.125000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:08.967 [INFO][5481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0 calico-kube-controllers-549bfc7bd9- calico-system 222ba326-59d2-4676-b30c-82b655f93a5f 861 0 2025-12-16 02:06:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:549bfc7bd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 
ci-4547.0.0-a-de7f477aa9 calico-kube-controllers-549bfc7bd9-vst6l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif32accc6d0c [] [] }} ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Namespace="calico-system" Pod="calico-kube-controllers-549bfc7bd9-vst6l" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:08.968 [INFO][5481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Namespace="calico-system" Pod="calico-kube-controllers-549bfc7bd9-vst6l" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.050 [INFO][5508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" HandleID="k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.050 [INFO][5508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" HandleID="k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-de7f477aa9", "pod":"calico-kube-controllers-549bfc7bd9-vst6l", "timestamp":"2025-12-16 02:07:09.050658494 +0000 UTC"}, Hostname:"ci-4547.0.0-a-de7f477aa9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.050 [INFO][5508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.050 [INFO][5508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.050 [INFO][5508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-de7f477aa9' Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.058 [INFO][5508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.074 [INFO][5508] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.078 [INFO][5508] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.080 [INFO][5508] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.083 [INFO][5508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.083 [INFO][5508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.085 [INFO][5508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25 Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.093 [INFO][5508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.103 [INFO][5508] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.7/26] block=192.168.4.0/26 handle="k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.103 [INFO][5508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.7/26] handle="k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.103 [INFO][5508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
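The IPAM exchanges logged above all follow the same Calico pattern: take the host-wide IPAM lock, confirm this node's affinity for the 192.168.4.0/26 block, claim one address from it (192.168.4.6, .7 and .8 for the three pods in this section), then release the lock. (Elsewhere in these dumps the Go structs print port numbers in hex: 0x35 is 53 and 0x23c1 is 9153.) Purely as a sanity check, and not anything Calico-specific, the minimal Go sketch below uses the standard net package to confirm that the claimed addresses fall inside that /26 and that the block holds 64 addresses; the CIDR and IPs are copied from the log, the rest is illustrative.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and addresses copied from the IPAM entries above.
	_, block, err := net.ParseCIDR("192.168.4.0/26")
	if err != nil {
		panic(err)
	}

	ones, bits := block.Mask.Size()
	fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64 for a /26

	for _, s := range []string{"192.168.4.6", "192.168.4.7", "192.168.4.8"} {
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(net.ParseIP(s)))
	}
}
```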
Dec 16 02:07:09.152261 containerd[2088]: 2025-12-16 02:07:09.104 [INFO][5508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.7/26] IPv6=[] ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" HandleID="k8s-pod-network.e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Workload="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" Dec 16 02:07:09.152662 containerd[2088]: 2025-12-16 02:07:09.109 [INFO][5481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Namespace="calico-system" Pod="calico-kube-controllers-549bfc7bd9-vst6l" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0", GenerateName:"calico-kube-controllers-549bfc7bd9-", Namespace:"calico-system", SelfLink:"", UID:"222ba326-59d2-4676-b30c-82b655f93a5f", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549bfc7bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"", Pod:"calico-kube-controllers-549bfc7bd9-vst6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif32accc6d0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:09.152662 containerd[2088]: 2025-12-16 02:07:09.109 [INFO][5481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.7/32] ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Namespace="calico-system" Pod="calico-kube-controllers-549bfc7bd9-vst6l" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" Dec 16 02:07:09.152662 containerd[2088]: 2025-12-16 02:07:09.109 [INFO][5481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif32accc6d0c ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Namespace="calico-system" Pod="calico-kube-controllers-549bfc7bd9-vst6l" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" Dec 16 02:07:09.152662 containerd[2088]: 2025-12-16 02:07:09.123 [INFO][5481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Namespace="calico-system" Pod="calico-kube-controllers-549bfc7bd9-vst6l" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" 
Dec 16 02:07:09.152662 containerd[2088]: 2025-12-16 02:07:09.127 [INFO][5481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Namespace="calico-system" Pod="calico-kube-controllers-549bfc7bd9-vst6l" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0", GenerateName:"calico-kube-controllers-549bfc7bd9-", Namespace:"calico-system", SelfLink:"", UID:"222ba326-59d2-4676-b30c-82b655f93a5f", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549bfc7bd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25", Pod:"calico-kube-controllers-549bfc7bd9-vst6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif32accc6d0c", MAC:"4a:a5:11:59:56:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:09.152662 containerd[2088]: 2025-12-16 02:07:09.150 [INFO][5481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" Namespace="calico-system" Pod="calico-kube-controllers-549bfc7bd9-vst6l" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-calico--kube--controllers--549bfc7bd9--vst6l-eth0" Dec 16 02:07:09.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613830373462633337643765303132666663353631373932616332 Dec 16 02:07:09.168462 kernel: audit: type=1327 audit(1765850829.125:708): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613830373462633337643765303132666663353631373932616332 Dec 16 02:07:09.125000 audit: BPF prog-id=246 op=UNLOAD Dec 16 02:07:09.125000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.125000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613830373462633337643765303132666663353631373932616332 Dec 16 02:07:09.125000 audit: BPF prog-id=247 op=LOAD Dec 16 02:07:09.125000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613830373462633337643765303132666663353631373932616332 Dec 16 02:07:09.148000 audit: BPF prog-id=248 op=LOAD Dec 16 02:07:09.148000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613830373462633337643765303132666663353631373932616332 Dec 16 02:07:09.148000 audit: BPF prog-id=248 op=UNLOAD Dec 16 02:07:09.148000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613830373462633337643765303132666663353631373932616332 Dec 16 02:07:09.148000 audit: BPF prog-id=247 op=UNLOAD Dec 16 02:07:09.148000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613830373462633337643765303132666663353631373932616332 Dec 16 02:07:09.148000 audit: BPF prog-id=249 op=LOAD Dec 16 02:07:09.148000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5537 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.148000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613830373462633337643765303132666663353631373932616332 Dec 16 02:07:09.162000 audit[5575]: NETFILTER_CFG table=filter:135 family=2 entries=52 op=nft_register_chain pid=5575 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:09.162000 audit[5575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffd9fb8bd0 a2=0 a3=ffff8e764fa8 items=0 ppid=5028 pid=5575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.162000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:09.212883 systemd-networkd[1667]: cali1ad0d13dafd: Link UP Dec 16 02:07:09.213459 systemd-networkd[1667]: cali1ad0d13dafd: Gained carrier Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.054 [INFO][5494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0 coredns-674b8bbfcf- kube-system ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1 859 0 2025-12-16 02:06:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-de7f477aa9 coredns-674b8bbfcf-dpg5v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ad0d13dafd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Namespace="kube-system" Pod="coredns-674b8bbfcf-dpg5v" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.054 [INFO][5494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Namespace="kube-system" Pod="coredns-674b8bbfcf-dpg5v" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.094 [INFO][5524] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" HandleID="k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Workload="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.094 [INFO][5524] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" HandleID="k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Workload="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-de7f477aa9", "pod":"coredns-674b8bbfcf-dpg5v", "timestamp":"2025-12-16 02:07:09.094012207 +0000 UTC"}, Hostname:"ci-4547.0.0-a-de7f477aa9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.094 [INFO][5524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.103 [INFO][5524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.104 [INFO][5524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-de7f477aa9' Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.171 [INFO][5524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.178 [INFO][5524] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.184 [INFO][5524] ipam/ipam.go 511: Trying affinity for 192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.187 [INFO][5524] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.189 [INFO][5524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.189 [INFO][5524] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.190 [INFO][5524] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171 Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.198 [INFO][5524] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.207 [INFO][5524] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.8/26] block=192.168.4.0/26 handle="k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.207 [INFO][5524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.8/26] handle="k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" host="ci-4547.0.0-a-de7f477aa9" Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.208 [INFO][5524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
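The audit PROCTITLE records scattered through this section encode the audited process's command line as a hex string with NUL-separated arguments; the value beginning 69707461626C65732D6E66742D726573746F7265… decodes to iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000, i.e. Calico programming its chains through the nft backend (the accompanying NETFILTER_CFG records show the nft_register_chain operations). A small Go sketch that performs the decoding, using a hex value copied from one of the records above:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into the
// original argv; arguments are separated by NUL bytes in the raw record.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(string(raw), "\x00"), nil
}

func main() {
	// Hex value copied verbatim from one of the audit records above.
	const p = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	args, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(args, " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```

The runc PROCTITLE values later in the section decode the same way, to runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/… with the container ID truncated by the audit record.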
Dec 16 02:07:09.233656 containerd[2088]: 2025-12-16 02:07:09.208 [INFO][5524] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.8/26] IPv6=[] ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" HandleID="k8s-pod-network.bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Workload="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" Dec 16 02:07:09.234352 containerd[2088]: 2025-12-16 02:07:09.209 [INFO][5494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Namespace="kube-system" Pod="coredns-674b8bbfcf-dpg5v" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"", Pod:"coredns-674b8bbfcf-dpg5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ad0d13dafd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:09.234352 containerd[2088]: 2025-12-16 02:07:09.210 [INFO][5494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.8/32] ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Namespace="kube-system" Pod="coredns-674b8bbfcf-dpg5v" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" Dec 16 02:07:09.234352 containerd[2088]: 2025-12-16 02:07:09.210 [INFO][5494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ad0d13dafd ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Namespace="kube-system" Pod="coredns-674b8bbfcf-dpg5v" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" Dec 16 02:07:09.234352 containerd[2088]: 2025-12-16 02:07:09.212 [INFO][5494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Namespace="kube-system" Pod="coredns-674b8bbfcf-dpg5v" 
WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" Dec 16 02:07:09.234352 containerd[2088]: 2025-12-16 02:07:09.213 [INFO][5494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Namespace="kube-system" Pod="coredns-674b8bbfcf-dpg5v" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 6, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-de7f477aa9", ContainerID:"bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171", Pod:"coredns-674b8bbfcf-dpg5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ad0d13dafd", MAC:"8a:b7:99:2c:be:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:07:09.234352 containerd[2088]: 2025-12-16 02:07:09.230 [INFO][5494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" Namespace="kube-system" Pod="coredns-674b8bbfcf-dpg5v" WorkloadEndpoint="ci--4547.0.0--a--de7f477aa9-k8s-coredns--674b8bbfcf--dpg5v-eth0" Dec 16 02:07:09.241000 audit[5594]: NETFILTER_CFG table=filter:136 family=2 entries=52 op=nft_register_chain pid=5594 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:07:09.241000 audit[5594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23892 a0=3 a1=ffffe3b7ca40 a2=0 a3=ffff8c1a6fa8 items=0 ppid=5028 pid=5594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.241000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:07:09.368168 containerd[2088]: time="2025-12-16T02:07:09.368114773Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Dec 16 02:07:09.412883 containerd[2088]: time="2025-12-16T02:07:09.412710366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmzsh,Uid:8d6bb703-5160-48e3-8477-a1bbde860409,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffa8074bc37d7e012ffc561792ac20d1c236e5b9e7f0cdc80b00ecff7be3a283\"" Dec 16 02:07:09.484711 kubelet[3639]: E1216 02:07:09.484145 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:07:09.512000 audit[5596]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:09.512000 audit[5596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd8b0f770 a2=0 a3=1 items=0 ppid=3793 pid=5596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:09.518000 audit[5596]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=5596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:09.518000 audit[5596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd8b0f770 a2=0 a3=1 items=0 ppid=3793 pid=5596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:09.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:10.169825 containerd[2088]: time="2025-12-16T02:07:10.169653210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:07:10.169825 containerd[2088]: time="2025-12-16T02:07:10.169757678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:10.170207 kubelet[3639]: E1216 02:07:10.169918 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:07:10.170207 kubelet[3639]: E1216 02:07:10.169963 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 
02:07:10.170207 kubelet[3639]: E1216 02:07:10.170131 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5fe0b406afc34a4baf35acad091248e2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q5xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9cf5fb98-qdzq2_calico-system(81a4ac18-79c3-4965-a33a-757950d44671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:10.170932 containerd[2088]: time="2025-12-16T02:07:10.170607425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:07:10.285978 systemd-networkd[1667]: cali1ad0d13dafd: Gained IPv6LL Dec 16 02:07:10.606360 systemd-networkd[1667]: cali44b50dc30f7: Gained IPv6LL Dec 16 02:07:10.606942 systemd-networkd[1667]: calic4facdea175: Gained IPv6LL Dec 16 02:07:10.861965 systemd-networkd[1667]: calif32accc6d0c: Gained IPv6LL Dec 16 02:07:11.068988 containerd[2088]: time="2025-12-16T02:07:11.068939226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:11.130261 containerd[2088]: time="2025-12-16T02:07:11.129938921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:07:11.130261 containerd[2088]: time="2025-12-16T02:07:11.130035212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:11.130380 kubelet[3639]: E1216 02:07:11.130218 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:07:11.130380 kubelet[3639]: E1216 02:07:11.130260 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:07:11.130618 kubelet[3639]: E1216 02:07:11.130434 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw58z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:11.131223 containerd[2088]: time="2025-12-16T02:07:11.130905569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:07:11.270029 containerd[2088]: time="2025-12-16T02:07:11.269986225Z" level=info msg="connecting to shim 72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e" address="unix:///run/containerd/s/76943ec5a2555ee15c6dc96ab2a3a8b7491f81a68fa6a165231b7f2230e1ee28" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:07:11.288947 systemd[1]: Started cri-containerd-72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e.scope - libcontainer container 72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e. 
Dec 16 02:07:11.295000 audit: BPF prog-id=250 op=LOAD Dec 16 02:07:11.296000 audit: BPF prog-id=251 op=LOAD Dec 16 02:07:11.296000 audit[5616]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5605 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732373136623033316435636366373530366138323831633632333836 Dec 16 02:07:11.296000 audit: BPF prog-id=251 op=UNLOAD Dec 16 02:07:11.296000 audit[5616]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5605 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732373136623033316435636366373530366138323831633632333836 Dec 16 02:07:11.296000 audit: BPF prog-id=252 op=LOAD Dec 16 02:07:11.296000 audit[5616]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5605 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732373136623033316435636366373530366138323831633632333836 Dec 16 02:07:11.296000 audit: BPF prog-id=253 op=LOAD Dec 16 02:07:11.296000 audit[5616]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5605 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732373136623033316435636366373530366138323831633632333836 Dec 16 02:07:11.296000 audit: BPF prog-id=253 op=UNLOAD Dec 16 02:07:11.296000 audit[5616]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5605 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732373136623033316435636366373530366138323831633632333836 Dec 16 02:07:11.296000 audit: BPF prog-id=252 op=UNLOAD Dec 16 02:07:11.296000 audit[5616]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5605 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732373136623033316435636366373530366138323831633632333836 Dec 16 02:07:11.296000 audit: BPF prog-id=254 op=LOAD Dec 16 02:07:11.296000 audit[5616]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5605 pid=5616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732373136623033316435636366373530366138323831633632333836 Dec 16 02:07:11.320024 containerd[2088]: time="2025-12-16T02:07:11.319923865Z" level=info msg="connecting to shim e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25" address="unix:///run/containerd/s/bd622f331d900f85caad3efe40a556c28e588c07a5d965470b9c40e9c12ef45d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:07:11.345965 systemd[1]: Started cri-containerd-e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25.scope - libcontainer container e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25. 
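The audit records around each container start are runc invocations: arch=c00000b7 is AUDIT_ARCH_AARCH64, and on the arm64 syscall table 280 is bpf and 57 is close, which matches the BPF prog LOAD/UNLOAD pairs. The PROCTITLE field is the process command line, hex-encoded with NUL bytes between arguments; a small decoding sketch (editor-added, not from the log — the sample string is only the first few bytes of one proctitle above):

```go
// Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// Replace the NUL separators so the argv prints as one line.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// First bytes of one proctitle above; the full value decodes to
	// "runc --root /run/containerd/runc/k8s.io --log /run/containerd/...".
	s, err := decodeProctitle("72756E63002D2D726F6F74")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // prints: runc --root
}
```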
Dec 16 02:07:11.352000 audit: BPF prog-id=255 op=LOAD Dec 16 02:07:11.353000 audit: BPF prog-id=256 op=LOAD Dec 16 02:07:11.353000 audit[5659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633035316562346163646664643233623366306663346535643362 Dec 16 02:07:11.353000 audit: BPF prog-id=256 op=UNLOAD Dec 16 02:07:11.353000 audit[5659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633035316562346163646664643233623366306663346535643362 Dec 16 02:07:11.353000 audit: BPF prog-id=257 op=LOAD Dec 16 02:07:11.353000 audit[5659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633035316562346163646664643233623366306663346535643362 Dec 16 02:07:11.353000 audit: BPF prog-id=258 op=LOAD Dec 16 02:07:11.353000 audit[5659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633035316562346163646664643233623366306663346535643362 Dec 16 02:07:11.354000 audit: BPF prog-id=258 op=UNLOAD Dec 16 02:07:11.354000 audit[5659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633035316562346163646664643233623366306663346535643362 Dec 16 02:07:11.354000 audit: BPF prog-id=257 op=UNLOAD Dec 16 02:07:11.354000 audit[5659]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633035316562346163646664643233623366306663346535643362 Dec 16 02:07:11.354000 audit: BPF prog-id=259 op=LOAD Dec 16 02:07:11.354000 audit[5659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5648 pid=5659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537633035316562346163646664643233623366306663346535643362 Dec 16 02:07:11.371318 containerd[2088]: time="2025-12-16T02:07:11.370973124Z" level=info msg="connecting to shim 62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83" address="unix:///run/containerd/s/f948d105114d0e2590d61458c8555460e6b117cc566cad4622c9b481ceb81b68" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:07:11.398955 systemd[1]: Started cri-containerd-62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83.scope - libcontainer container 62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83. 
Dec 16 02:07:11.405000 audit: BPF prog-id=260 op=LOAD Dec 16 02:07:11.406000 audit: BPF prog-id=261 op=LOAD Dec 16 02:07:11.406000 audit[5701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5688 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632636664663533376535346431323966303666373238363564653262 Dec 16 02:07:11.406000 audit: BPF prog-id=261 op=UNLOAD Dec 16 02:07:11.406000 audit[5701]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5688 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632636664663533376535346431323966303666373238363564653262 Dec 16 02:07:11.406000 audit: BPF prog-id=262 op=LOAD Dec 16 02:07:11.406000 audit[5701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5688 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632636664663533376535346431323966303666373238363564653262 Dec 16 02:07:11.406000 audit: BPF prog-id=263 op=LOAD Dec 16 02:07:11.406000 audit[5701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5688 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632636664663533376535346431323966303666373238363564653262 Dec 16 02:07:11.406000 audit: BPF prog-id=263 op=UNLOAD Dec 16 02:07:11.406000 audit[5701]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5688 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632636664663533376535346431323966303666373238363564653262 Dec 16 02:07:11.406000 audit: BPF prog-id=262 op=UNLOAD Dec 16 02:07:11.406000 audit[5701]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5688 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632636664663533376535346431323966303666373238363564653262 Dec 16 02:07:11.406000 audit: BPF prog-id=264 op=LOAD Dec 16 02:07:11.406000 audit[5701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5688 pid=5701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632636664663533376535346431323966303666373238363564653262 Dec 16 02:07:11.522189 containerd[2088]: time="2025-12-16T02:07:11.522139302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69677bd74f-gv2ff,Uid:e320a820-257c-48b1-85de-c1dd7b465c9a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"72716b031d5ccf7506a8281c62386d535e1d174c3005cfa53c1d0029d144ea1e\"" Dec 16 02:07:11.565512 containerd[2088]: time="2025-12-16T02:07:11.565482303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549bfc7bd9-vst6l,Uid:222ba326-59d2-4676-b30c-82b655f93a5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7c051eb4acdfdd23b3f0fc4e5d3b1ba98c5cc15647bfd28b7450518a242ce25\"" Dec 16 02:07:11.612411 containerd[2088]: time="2025-12-16T02:07:11.612371771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vsht2,Uid:7c04891e-a2dd-497f-9457-bad48f5fedbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83\"" Dec 16 02:07:11.663695 containerd[2088]: time="2025-12-16T02:07:11.663369628Z" level=info msg="CreateContainer within sandbox \"62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:07:11.668321 containerd[2088]: time="2025-12-16T02:07:11.668294149Z" level=info msg="connecting to shim bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171" address="unix:///run/containerd/s/89cff75abbfb1fff56c6d757729b7d6130c0977d8cf8f6b7bb434678dcd2b525" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:07:11.686946 systemd[1]: Started cri-containerd-bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171.scope - libcontainer container bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171. 
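Each "RunPodSandbox ... returns sandbox id" entry corresponds to a pod sandbox that containerd now tracks through the CRI. A minimal sketch for listing them over the same API, assuming the conventional CRI endpoint unix:///run/containerd/containerd.sock (the same default endpoint crictl uses); this is illustrative only, not something the node above actually ran:

```go
// Sketch: list the sandboxes containerd just reported, over the CRI gRPC API.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint: containerd's default socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		// The Ids here match the "returns sandbox id" values in the log.
		fmt.Printf("%v  %s/%s  %v\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}
```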
Dec 16 02:07:11.690528 containerd[2088]: time="2025-12-16T02:07:11.690458205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:11.694000 audit: BPF prog-id=265 op=LOAD Dec 16 02:07:11.694000 audit: BPF prog-id=266 op=LOAD Dec 16 02:07:11.694000 audit[5752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5740 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263303431363630663835326466346466333432363735653630653938 Dec 16 02:07:11.694000 audit: BPF prog-id=266 op=UNLOAD Dec 16 02:07:11.694000 audit[5752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5740 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263303431363630663835326466346466333432363735653630653938 Dec 16 02:07:11.694000 audit: BPF prog-id=267 op=LOAD Dec 16 02:07:11.694000 audit[5752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5740 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263303431363630663835326466346466333432363735653630653938 Dec 16 02:07:11.694000 audit: BPF prog-id=268 op=LOAD Dec 16 02:07:11.694000 audit[5752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5740 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263303431363630663835326466346466333432363735653630653938 Dec 16 02:07:11.694000 audit: BPF prog-id=268 op=UNLOAD Dec 16 02:07:11.694000 audit[5752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5740 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.694000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263303431363630663835326466346466333432363735653630653938 Dec 16 02:07:11.694000 audit: BPF prog-id=267 op=UNLOAD Dec 16 02:07:11.694000 audit[5752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5740 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263303431363630663835326466346466333432363735653630653938 Dec 16 02:07:11.694000 audit: BPF prog-id=269 op=LOAD Dec 16 02:07:11.694000 audit[5752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5740 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:11.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263303431363630663835326466346466333432363735653630653938 Dec 16 02:07:11.770319 containerd[2088]: time="2025-12-16T02:07:11.770182460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:07:11.770319 containerd[2088]: time="2025-12-16T02:07:11.770280511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:11.770711 kubelet[3639]: E1216 02:07:11.770676 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:07:11.771227 kubelet[3639]: E1216 02:07:11.770842 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:07:11.771227 kubelet[3639]: E1216 02:07:11.771018 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9cf5fb98-qdzq2_calico-system(81a4ac18-79c3-4965-a33a-757950d44671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:11.772134 containerd[2088]: time="2025-12-16T02:07:11.772081962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:07:11.773139 kubelet[3639]: E1216 02:07:11.773109 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:07:11.818444 containerd[2088]: time="2025-12-16T02:07:11.818402228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dpg5v,Uid:ef9d7f1f-9be7-4bfe-b6f9-f394238ebde1,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171\"" Dec 16 02:07:11.869565 containerd[2088]: time="2025-12-16T02:07:11.869520865Z" level=info msg="CreateContainer within sandbox 
\"bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:07:12.212359 containerd[2088]: time="2025-12-16T02:07:12.212315872Z" level=info msg="Container 9b3c44be2cea2343ac340668711bdd0bc46531b9223d58f2ec670aaa39fc40cf: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:07:12.224756 containerd[2088]: time="2025-12-16T02:07:12.224710683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:12.376971 containerd[2088]: time="2025-12-16T02:07:12.376852933Z" level=info msg="CreateContainer within sandbox \"62cfdf537e54d129f06f72865de2b2fb5e2818630acbba7af8ad2fbe4a376c83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b3c44be2cea2343ac340668711bdd0bc46531b9223d58f2ec670aaa39fc40cf\"" Dec 16 02:07:12.380319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895391301.mount: Deactivated successfully. Dec 16 02:07:12.380530 containerd[2088]: time="2025-12-16T02:07:12.380433065Z" level=info msg="StartContainer for \"9b3c44be2cea2343ac340668711bdd0bc46531b9223d58f2ec670aaa39fc40cf\"" Dec 16 02:07:12.384637 containerd[2088]: time="2025-12-16T02:07:12.384608953Z" level=info msg="Container 24e4d7b7aa5ae514bc7b1d2dd2773ed8f513353785ec843612df1b3de7d77307: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:07:12.385342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118552261.mount: Deactivated successfully. Dec 16 02:07:12.386838 containerd[2088]: time="2025-12-16T02:07:12.386815993Z" level=info msg="connecting to shim 9b3c44be2cea2343ac340668711bdd0bc46531b9223d58f2ec670aaa39fc40cf" address="unix:///run/containerd/s/f948d105114d0e2590d61458c8555460e6b117cc566cad4622c9b481ceb81b68" protocol=ttrpc version=3 Dec 16 02:07:12.400932 systemd[1]: Started cri-containerd-9b3c44be2cea2343ac340668711bdd0bc46531b9223d58f2ec670aaa39fc40cf.scope - libcontainer container 9b3c44be2cea2343ac340668711bdd0bc46531b9223d58f2ec670aaa39fc40cf. 
Dec 16 02:07:12.408000 audit: BPF prog-id=270 op=LOAD Dec 16 02:07:12.408000 audit: BPF prog-id=271 op=LOAD Dec 16 02:07:12.408000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5688 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962336334346265326365613233343361633334303636383731316264 Dec 16 02:07:12.408000 audit: BPF prog-id=271 op=UNLOAD Dec 16 02:07:12.408000 audit[5781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5688 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962336334346265326365613233343361633334303636383731316264 Dec 16 02:07:12.408000 audit: BPF prog-id=272 op=LOAD Dec 16 02:07:12.408000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5688 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962336334346265326365613233343361633334303636383731316264 Dec 16 02:07:12.408000 audit: BPF prog-id=273 op=LOAD Dec 16 02:07:12.408000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5688 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962336334346265326365613233343361633334303636383731316264 Dec 16 02:07:12.408000 audit: BPF prog-id=273 op=UNLOAD Dec 16 02:07:12.408000 audit[5781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5688 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962336334346265326365613233343361633334303636383731316264 Dec 16 02:07:12.408000 audit: BPF prog-id=272 op=UNLOAD Dec 16 02:07:12.408000 audit[5781]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5688 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962336334346265326365613233343361633334303636383731316264 Dec 16 02:07:12.408000 audit: BPF prog-id=274 op=LOAD Dec 16 02:07:12.408000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5688 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962336334346265326365613233343361633334303636383731316264 Dec 16 02:07:12.419332 containerd[2088]: time="2025-12-16T02:07:12.418888067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:07:12.419332 containerd[2088]: time="2025-12-16T02:07:12.419223782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:12.419591 kubelet[3639]: E1216 02:07:12.419547 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:07:12.420155 kubelet[3639]: E1216 02:07:12.419668 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:07:12.420155 kubelet[3639]: E1216 02:07:12.419851 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw58z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:12.420406 containerd[2088]: time="2025-12-16T02:07:12.420391236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:07:12.421858 kubelet[3639]: E1216 02:07:12.421671 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:07:12.512590 containerd[2088]: time="2025-12-16T02:07:12.512558144Z" level=info msg="StartContainer for \"9b3c44be2cea2343ac340668711bdd0bc46531b9223d58f2ec670aaa39fc40cf\" returns successfully" Dec 16 02:07:12.520574 kubelet[3639]: E1216 02:07:12.520483 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:07:12.551130 kubelet[3639]: I1216 02:07:12.550988 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vsht2" podStartSLOduration=70.550970896 podStartE2EDuration="1m10.550970896s" podCreationTimestamp="2025-12-16 02:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:07:12.533484232 +0000 UTC m=+75.431138008" watchObservedRunningTime="2025-12-16 02:07:12.550970896 +0000 UTC m=+75.448624672" Dec 16 02:07:12.557000 audit[5815]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:12.557000 audit[5815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffef329780 a2=0 a3=1 items=0 ppid=3793 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.557000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:12.560000 audit[5815]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=5815 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:12.560000 audit[5815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffef329780 a2=0 a3=1 items=0 ppid=3793 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:12.588000 audit[5817]: NETFILTER_CFG table=filter:141 family=2 entries=17 op=nft_register_rule pid=5817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:12.588000 audit[5817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffffeb90f0 a2=0 a3=1 items=0 ppid=3793 pid=5817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:12.595000 audit[5817]: NETFILTER_CFG table=nat:142 family=2 entries=35 op=nft_register_chain pid=5817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:12.595000 audit[5817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffffeb90f0 a2=0 a3=1 items=0 ppid=3793 pid=5817 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:12.611254 containerd[2088]: time="2025-12-16T02:07:12.611218751Z" level=info msg="CreateContainer within sandbox \"bc041660f852df4df342675e60e9862e4b809ef9787941b9f34320c30b179171\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24e4d7b7aa5ae514bc7b1d2dd2773ed8f513353785ec843612df1b3de7d77307\"" Dec 16 02:07:12.612127 containerd[2088]: time="2025-12-16T02:07:12.612042770Z" level=info msg="StartContainer for \"24e4d7b7aa5ae514bc7b1d2dd2773ed8f513353785ec843612df1b3de7d77307\"" Dec 16 02:07:12.613101 containerd[2088]: time="2025-12-16T02:07:12.613078555Z" level=info msg="connecting to shim 24e4d7b7aa5ae514bc7b1d2dd2773ed8f513353785ec843612df1b3de7d77307" address="unix:///run/containerd/s/89cff75abbfb1fff56c6d757729b7d6130c0977d8cf8f6b7bb434678dcd2b525" protocol=ttrpc version=3 Dec 16 02:07:12.633965 systemd[1]: Started cri-containerd-24e4d7b7aa5ae514bc7b1d2dd2773ed8f513353785ec843612df1b3de7d77307.scope - libcontainer container 24e4d7b7aa5ae514bc7b1d2dd2773ed8f513353785ec843612df1b3de7d77307. Dec 16 02:07:12.645000 audit: BPF prog-id=275 op=LOAD Dec 16 02:07:12.646000 audit: BPF prog-id=276 op=LOAD Dec 16 02:07:12.646000 audit[5818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5740 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653464376237616135616535313462633762316432646432373733 Dec 16 02:07:12.646000 audit: BPF prog-id=276 op=UNLOAD Dec 16 02:07:12.646000 audit[5818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5740 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653464376237616135616535313462633762316432646432373733 Dec 16 02:07:12.646000 audit: BPF prog-id=277 op=LOAD Dec 16 02:07:12.646000 audit[5818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5740 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653464376237616135616535313462633762316432646432373733 Dec 16 02:07:12.646000 audit: BPF prog-id=278 op=LOAD Dec 
16 02:07:12.646000 audit[5818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5740 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653464376237616135616535313462633762316432646432373733 Dec 16 02:07:12.646000 audit: BPF prog-id=278 op=UNLOAD Dec 16 02:07:12.646000 audit[5818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5740 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653464376237616135616535313462633762316432646432373733 Dec 16 02:07:12.646000 audit: BPF prog-id=277 op=UNLOAD Dec 16 02:07:12.646000 audit[5818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5740 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653464376237616135616535313462633762316432646432373733 Dec 16 02:07:12.646000 audit: BPF prog-id=279 op=LOAD Dec 16 02:07:12.646000 audit[5818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5740 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:12.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653464376237616135616535313462633762316432646432373733 Dec 16 02:07:12.671062 containerd[2088]: time="2025-12-16T02:07:12.671028023Z" level=info msg="StartContainer for \"24e4d7b7aa5ae514bc7b1d2dd2773ed8f513353785ec843612df1b3de7d77307\" returns successfully" Dec 16 02:07:12.914018 containerd[2088]: time="2025-12-16T02:07:12.913897461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:12.922463 containerd[2088]: time="2025-12-16T02:07:12.922417034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:07:12.922663 containerd[2088]: time="2025-12-16T02:07:12.922473180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes 
read=0" Dec 16 02:07:12.922829 kubelet[3639]: E1216 02:07:12.922781 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:12.923525 kubelet[3639]: E1216 02:07:12.922845 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:12.923525 kubelet[3639]: E1216 02:07:12.923437 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl6qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69677bd74f-gv2ff_calico-apiserver(e320a820-257c-48b1-85de-c1dd7b465c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:12.923611 containerd[2088]: time="2025-12-16T02:07:12.923120249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:07:12.925061 kubelet[3639]: E1216 02:07:12.925031 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:07:13.240058 containerd[2088]: time="2025-12-16T02:07:13.239937092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:13.244255 containerd[2088]: time="2025-12-16T02:07:13.244215807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:07:13.244339 containerd[2088]: time="2025-12-16T02:07:13.244307010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:13.244557 kubelet[3639]: E1216 02:07:13.244499 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:07:13.244766 kubelet[3639]: E1216 02:07:13.244644 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:07:13.245155 kubelet[3639]: E1216 02:07:13.244891 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2b6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549bfc7bd9-vst6l_calico-system(222ba326-59d2-4676-b30c-82b655f93a5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:13.246335 kubelet[3639]: E1216 02:07:13.246303 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:07:13.522947 kubelet[3639]: E1216 02:07:13.522914 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:07:13.523657 kubelet[3639]: E1216 02:07:13.523612 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:07:13.566000 audit[5852]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:13.566000 audit[5852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff8a118b0 a2=0 a3=1 items=0 ppid=3793 pid=5852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:13.566000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:13.573000 audit[5852]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:13.573000 audit[5852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff8a118b0 a2=0 a3=1 items=0 ppid=3793 pid=5852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:13.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:14.541205 kubelet[3639]: I1216 02:07:14.540770 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dpg5v" podStartSLOduration=72.540756391 podStartE2EDuration="1m12.540756391s" podCreationTimestamp="2025-12-16 02:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:07:13.57338865 +0000 UTC m=+76.471042434" watchObservedRunningTime="2025-12-16 02:07:14.540756391 +0000 UTC m=+77.438410167" Dec 16 02:07:14.564000 audit[5856]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:14.569195 kernel: kauditd_printk_skb: 180 callbacks suppressed Dec 16 02:07:14.569244 kernel: audit: type=1325 audit(1765850834.564:773): table=filter:145 family=2 entries=14 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:14.564000 audit[5856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffef9cb9f0 a2=0 a3=1 items=0 ppid=3793 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:14.594728 kernel: audit: type=1300 audit(1765850834.564:773): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffef9cb9f0 a2=0 a3=1 items=0 ppid=3793 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:14.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:14.604104 kernel: audit: type=1327 audit(1765850834.564:773): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:14.606000 audit[5856]: NETFILTER_CFG table=nat:146 family=2 entries=56 op=nft_register_chain pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:14.606000 audit[5856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffef9cb9f0 a2=0 a3=1 items=0 ppid=3793 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:14.634975 kernel: audit: type=1325 audit(1765850834.606:774): table=nat:146 family=2 entries=56 op=nft_register_chain pid=5856 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:07:14.635058 kernel: audit: type=1300 audit(1765850834.606:774): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffef9cb9f0 a2=0 a3=1 items=0 ppid=3793 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:14.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:14.643868 kernel: audit: type=1327 audit(1765850834.606:774): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:07:18.291012 containerd[2088]: time="2025-12-16T02:07:18.290973362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:07:18.562626 containerd[2088]: time="2025-12-16T02:07:18.562476640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:18.567010 containerd[2088]: time="2025-12-16T02:07:18.566950961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:07:18.567066 containerd[2088]: time="2025-12-16T02:07:18.567038764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:18.567441 kubelet[3639]: E1216 02:07:18.567197 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:07:18.567441 kubelet[3639]: E1216 02:07:18.567274 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:07:18.567441 kubelet[3639]: E1216 02:07:18.567395 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m25gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s52v5_calico-system(f469a490-fb18-4868-b583-cd075b9a892c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:18.568824 kubelet[3639]: E1216 02:07:18.568793 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:07:22.291953 containerd[2088]: time="2025-12-16T02:07:22.291913963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 
16 02:07:23.292124 kubelet[3639]: E1216 02:07:23.292064 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:07:25.197830 containerd[2088]: time="2025-12-16T02:07:25.197621605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:25.623745 containerd[2088]: time="2025-12-16T02:07:25.623591328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:07:25.623745 containerd[2088]: time="2025-12-16T02:07:25.623704580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:25.624130 kubelet[3639]: E1216 02:07:25.624073 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:25.624130 kubelet[3639]: E1216 02:07:25.624127 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:25.624565 containerd[2088]: time="2025-12-16T02:07:25.624371154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:07:25.624847 kubelet[3639]: E1216 02:07:25.624687 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7pm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69677bd74f-b42tq_calico-apiserver(5f7514dc-37b5-4bac-8d4e-04c87fb3f679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:25.625972 kubelet[3639]: E1216 02:07:25.625934 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:07:25.909367 containerd[2088]: time="2025-12-16T02:07:25.909238692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:25.919999 containerd[2088]: time="2025-12-16T02:07:25.919951209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:07:25.920094 containerd[2088]: time="2025-12-16T02:07:25.920038555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:25.920258 kubelet[3639]: E1216 02:07:25.920214 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:25.920313 kubelet[3639]: E1216 02:07:25.920262 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:25.921321 kubelet[3639]: E1216 02:07:25.920375 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl6qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69677bd74f-gv2ff_calico-apiserver(e320a820-257c-48b1-85de-c1dd7b465c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:25.921952 kubelet[3639]: E1216 02:07:25.921926 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:07:26.292873 containerd[2088]: 
time="2025-12-16T02:07:26.292709567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:07:26.548973 containerd[2088]: time="2025-12-16T02:07:26.548851266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:26.552982 containerd[2088]: time="2025-12-16T02:07:26.552941256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:07:26.553215 containerd[2088]: time="2025-12-16T02:07:26.553007410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:26.553358 kubelet[3639]: E1216 02:07:26.553263 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:07:26.553528 kubelet[3639]: E1216 02:07:26.553303 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:07:26.553772 kubelet[3639]: E1216 02:07:26.553542 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2b6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549bfc7bd9-vst6l_calico-system(222ba326-59d2-4676-b30c-82b655f93a5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:26.554986 containerd[2088]: time="2025-12-16T02:07:26.554797052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:07:26.555088 kubelet[3639]: E1216 02:07:26.554641 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:07:26.807775 containerd[2088]: time="2025-12-16T02:07:26.807661132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:26.811933 containerd[2088]: time="2025-12-16T02:07:26.811823852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:07:26.811933 containerd[2088]: time="2025-12-16T02:07:26.811901422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:26.812123 kubelet[3639]: E1216 02:07:26.812057 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:07:26.812123 kubelet[3639]: E1216 02:07:26.812101 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:07:26.812515 kubelet[3639]: E1216 02:07:26.812224 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw58z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:26.814405 containerd[2088]: time="2025-12-16T02:07:26.814382615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:07:27.110172 containerd[2088]: time="2025-12-16T02:07:27.109935381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:27.113762 containerd[2088]: time="2025-12-16T02:07:27.113637037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:07:27.113762 containerd[2088]: time="2025-12-16T02:07:27.113718064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:27.113892 kubelet[3639]: E1216 02:07:27.113863 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:07:27.113927 kubelet[3639]: E1216 02:07:27.113907 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:07:27.114059 kubelet[3639]: E1216 02:07:27.114001 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw58z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:27.115168 kubelet[3639]: E1216 02:07:27.115135 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:07:33.290506 kubelet[3639]: E1216 02:07:33.290424 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:07:37.293456 containerd[2088]: time="2025-12-16T02:07:37.293270276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:07:37.556323 containerd[2088]: time="2025-12-16T02:07:37.555943093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:37.562716 containerd[2088]: time="2025-12-16T02:07:37.562681159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:07:37.562815 containerd[2088]: time="2025-12-16T02:07:37.562758418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:37.563140 kubelet[3639]: E1216 02:07:37.562935 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:07:37.563140 kubelet[3639]: E1216 02:07:37.562988 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:07:37.563140 kubelet[3639]: E1216 02:07:37.563098 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5fe0b406afc34a4baf35acad091248e2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q5xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9cf5fb98-qdzq2_calico-system(81a4ac18-79c3-4965-a33a-757950d44671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:37.565507 containerd[2088]: time="2025-12-16T02:07:37.565282004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:07:37.873680 containerd[2088]: time="2025-12-16T02:07:37.873185345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:37.880419 containerd[2088]: time="2025-12-16T02:07:37.880269839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:07:37.880419 containerd[2088]: time="2025-12-16T02:07:37.880379459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:37.880717 kubelet[3639]: E1216 02:07:37.880665 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:07:37.880769 kubelet[3639]: E1216 02:07:37.880725 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:07:37.880989 kubelet[3639]: E1216 02:07:37.880955 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9cf5fb98-qdzq2_calico-system(81a4ac18-79c3-4965-a33a-757950d44671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:37.882163 kubelet[3639]: E1216 02:07:37.882121 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:07:38.290594 kubelet[3639]: E1216 02:07:38.290494 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:07:38.290594 kubelet[3639]: E1216 02:07:38.290558 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:07:41.292809 kubelet[3639]: E1216 02:07:41.290994 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:07:41.292809 kubelet[3639]: E1216 02:07:41.291461 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:07:45.291882 containerd[2088]: time="2025-12-16T02:07:45.291835248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:07:45.576102 containerd[2088]: time="2025-12-16T02:07:45.575632333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:45.583237 containerd[2088]: time="2025-12-16T02:07:45.583130656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:07:45.583237 containerd[2088]: time="2025-12-16T02:07:45.583179394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:45.583397 kubelet[3639]: E1216 02:07:45.583360 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:07:45.583675 kubelet[3639]: E1216 02:07:45.583445 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:07:45.583675 kubelet[3639]: E1216 02:07:45.583579 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m25gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s52v5_calico-system(f469a490-fb18-4868-b583-cd075b9a892c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:45.584987 kubelet[3639]: E1216 02:07:45.584941 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:07:47.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:54120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:47.550429 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:54120.service - OpenSSH per-connection server daemon (10.200.16.10:54120). Dec 16 02:07:47.571193 kernel: audit: type=1130 audit(1765850867.549:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:54120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:47.989294 sshd[5913]: Accepted publickey for core from 10.200.16.10 port 54120 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:07:47.988000 audit[5913]: USER_ACCT pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.027041 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:07:48.022000 audit[5913]: CRED_ACQ pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.051331 kernel: audit: type=1101 audit(1765850867.988:776): pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.051416 kernel: audit: type=1103 audit(1765850868.022:777): pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.059358 systemd-logind[2060]: New session 11 of user core. Dec 16 02:07:48.072939 kernel: audit: type=1006 audit(1765850868.022:778): pid=5913 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 02:07:48.074973 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 02:07:48.022000 audit[5913]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0e65280 a2=3 a3=0 items=0 ppid=1 pid=5913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:48.096699 kernel: audit: type=1300 audit(1765850868.022:778): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0e65280 a2=3 a3=0 items=0 ppid=1 pid=5913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:48.022000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:07:48.078000 audit[5913]: USER_START pid=5913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.128632 kernel: audit: type=1327 audit(1765850868.022:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:07:48.128722 kernel: audit: type=1105 audit(1765850868.078:779): pid=5913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.097000 audit[5917]: CRED_ACQ pid=5917 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.144726 kernel: audit: type=1103 audit(1765850868.097:780): pid=5917 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.340821 sshd[5917]: Connection closed by 10.200.16.10 port 54120 Dec 16 02:07:48.341441 sshd-session[5913]: pam_unix(sshd:session): session closed for user core Dec 16 02:07:48.341000 audit[5913]: USER_END pid=5913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.346515 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:54120.service: Deactivated successfully. Dec 16 02:07:48.348456 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 02:07:48.350168 systemd-logind[2060]: Session 11 logged out. Waiting for processes to exit. Dec 16 02:07:48.351900 systemd-logind[2060]: Removed session 11. 
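[Editor's note, not part of the captured log] The audit PROCTITLE records in this section carry the triggering command line hex-encoded, with NUL bytes separating argv elements; auditd hex-encodes the field because the raw command line can contain characters that would break the key=value record format. The short sketch below decodes the two values seen above: the iptables-restore invocation behind the NETFILTER_CFG records (the flags suggest kube-proxy's periodic rule sync) and the sshd-session process for session 11.

# Sketch: decode the hex-encoded proctitle fields from the audit records above.
# Both hex strings are copied verbatim from this log; argv elements are NUL-separated.
proctitles = [
    # NETFILTER_CFG / SYSCALL records (likely kube-proxy's rule sync, judging by the flags)
    "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
    # SSH session records for session 11
    "737368642D73657373696F6E3A20636F7265205B707269765D",
]
for hexval in proctitles:
    argv = bytes.fromhex(hexval).split(b"\x00")
    print(" ".join(part.decode() for part in argv))
# Output:
#   iptables-restore -w 5 -W 100000 --noflush --counters
#   sshd-session: core [priv]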
Dec 16 02:07:48.342000 audit[5913]: CRED_DISP pid=5913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.376406 kernel: audit: type=1106 audit(1765850868.341:781): pid=5913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.376492 kernel: audit: type=1104 audit(1765850868.342:782): pid=5913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:48.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:54120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:49.296872 containerd[2088]: time="2025-12-16T02:07:49.295865476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:07:49.298014 kubelet[3639]: E1216 02:07:49.297971 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:07:51.516739 containerd[2088]: time="2025-12-16T02:07:51.516627447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:51.520968 containerd[2088]: time="2025-12-16T02:07:51.520919426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:07:51.521053 containerd[2088]: time="2025-12-16T02:07:51.521025238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:51.521354 kubelet[3639]: E1216 02:07:51.521179 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:07:51.521354 kubelet[3639]: E1216 02:07:51.521228 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:07:51.522308 kubelet[3639]: E1216 02:07:51.521446 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw58z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:51.522389 containerd[2088]: time="2025-12-16T02:07:51.521638010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:07:51.777369 containerd[2088]: time="2025-12-16T02:07:51.777109185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:51.780929 containerd[2088]: time="2025-12-16T02:07:51.780782240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:07:51.780929 containerd[2088]: time="2025-12-16T02:07:51.780803817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:51.781460 kubelet[3639]: E1216 02:07:51.781314 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:51.781635 kubelet[3639]: E1216 02:07:51.781553 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:51.781881 kubelet[3639]: E1216 02:07:51.781753 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl6qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69677bd74f-gv2ff_calico-apiserver(e320a820-257c-48b1-85de-c1dd7b465c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:51.783072 kubelet[3639]: E1216 02:07:51.782844 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:07:51.783284 containerd[2088]: time="2025-12-16T02:07:51.783219607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 
02:07:52.092593 containerd[2088]: time="2025-12-16T02:07:52.092470087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:52.096634 containerd[2088]: time="2025-12-16T02:07:52.096582221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:07:52.096829 containerd[2088]: time="2025-12-16T02:07:52.096671808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:52.096994 kubelet[3639]: E1216 02:07:52.096955 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:07:52.097064 kubelet[3639]: E1216 02:07:52.097006 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:07:52.097145 kubelet[3639]: E1216 02:07:52.097109 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw58z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:52.098452 kubelet[3639]: E1216 02:07:52.098400 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:07:53.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:34328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.435185 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:34328.service - OpenSSH per-connection server daemon (10.200.16.10:34328). Dec 16 02:07:53.438941 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:07:53.439135 kernel: audit: type=1130 audit(1765850873.434:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:34328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.842000 audit[5940]: USER_ACCT pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:53.860508 sshd[5940]: Accepted publickey for core from 10.200.16.10 port 34328 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:07:53.861139 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:07:53.858000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:53.863894 kernel: audit: type=1101 audit(1765850873.842:785): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:53.883195 systemd-logind[2060]: New session 12 of user core. 
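The entries above repeat one pattern for several Calico images: ghcr.io answers 404, containerd reports the pull as NotFound, the kubelet surfaces ErrImagePull, and later sync attempts show ImagePullBackOff for the same tag. When triaging a dump like this it helps to tally which references are actually unresolvable; a rough Python sketch that does so (the `journal.txt` path and the regex are assumptions for illustration, not part of the system logged here):

```python
import re
from collections import Counter

# Matches the containerd/kubelet failure text seen above, e.g.
#   failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found
PATTERN = re.compile(r"failed to resolve image: (\S+?): not found")

def tally_missing_images(log_path: str) -> Counter:
    """Count how often each unresolvable image reference appears in a journal dump."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            counts.update(PATTERN.findall(line))
    return counts

if __name__ == "__main__":
    for image, hits in tally_missing_images("journal.txt").most_common():
        print(f"{hits:4d}  {image}")
```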
Dec 16 02:07:53.893813 kernel: audit: type=1103 audit(1765850873.858:786): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:53.893903 kernel: audit: type=1006 audit(1765850873.858:787): pid=5940 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 02:07:53.858000 audit[5940]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd12f4f0 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:53.910488 kernel: audit: type=1300 audit(1765850873.858:787): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd12f4f0 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:53.858000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:07:53.913142 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 02:07:53.918384 kernel: audit: type=1327 audit(1765850873.858:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:07:53.919000 audit[5940]: USER_START pid=5940 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:53.941000 audit[5944]: CRED_ACQ pid=5944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:53.956871 kernel: audit: type=1105 audit(1765850873.919:788): pid=5940 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:53.957004 kernel: audit: type=1103 audit(1765850873.941:789): pid=5944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:54.176288 sshd[5944]: Connection closed by 10.200.16.10 port 34328 Dec 16 02:07:54.176191 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Dec 16 02:07:54.178000 audit[5940]: USER_END pid=5940 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:54.183401 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:34328.service: Deactivated successfully. 
Dec 16 02:07:54.186174 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 02:07:54.187679 systemd-logind[2060]: Session 12 logged out. Waiting for processes to exit. Dec 16 02:07:54.192485 systemd-logind[2060]: Removed session 12. Dec 16 02:07:54.178000 audit[5940]: CRED_DISP pid=5940 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:54.219869 kernel: audit: type=1106 audit(1765850874.178:790): pid=5940 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:54.219960 kernel: audit: type=1104 audit(1765850874.178:791): pid=5940 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:54.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:34328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.290067 containerd[2088]: time="2025-12-16T02:07:54.290034369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:07:54.552638 containerd[2088]: time="2025-12-16T02:07:54.552588759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:54.564493 containerd[2088]: time="2025-12-16T02:07:54.564438415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:07:54.564860 containerd[2088]: time="2025-12-16T02:07:54.564527210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:54.564936 kubelet[3639]: E1216 02:07:54.564817 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:07:54.564936 kubelet[3639]: E1216 02:07:54.564864 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:07:54.565516 kubelet[3639]: E1216 02:07:54.565079 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2b6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549bfc7bd9-vst6l_calico-system(222ba326-59d2-4676-b30c-82b655f93a5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:54.565583 containerd[2088]: time="2025-12-16T02:07:54.565272946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:07:54.567194 kubelet[3639]: E1216 02:07:54.566872 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:07:54.844127 containerd[2088]: 
time="2025-12-16T02:07:54.843946883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:07:54.848154 containerd[2088]: time="2025-12-16T02:07:54.848063040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:07:54.848154 containerd[2088]: time="2025-12-16T02:07:54.848100809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:07:54.848346 kubelet[3639]: E1216 02:07:54.848305 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:54.848386 kubelet[3639]: E1216 02:07:54.848356 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:07:54.848499 kubelet[3639]: E1216 02:07:54.848463 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7pm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-69677bd74f-b42tq_calico-apiserver(5f7514dc-37b5-4bac-8d4e-04c87fb3f679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:07:54.849930 kubelet[3639]: E1216 02:07:54.849862 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:07:56.290464 kubelet[3639]: E1216 02:07:56.290204 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:07:59.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:34332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.263064 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:34332.service - OpenSSH per-connection server daemon (10.200.16.10:34332). Dec 16 02:07:59.267406 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:07:59.267496 kernel: audit: type=1130 audit(1765850879.262:793): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:34332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:07:59.702000 audit[5990]: USER_ACCT pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:59.703646 sshd[5990]: Accepted publickey for core from 10.200.16.10 port 34332 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:07:59.722000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:59.724907 sshd-session[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:07:59.738113 kernel: audit: type=1101 audit(1765850879.702:794): pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:59.738185 kernel: audit: type=1103 audit(1765850879.722:795): pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:59.748450 kernel: audit: type=1006 audit(1765850879.722:796): pid=5990 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 02:07:59.722000 audit[5990]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc93c4120 a2=3 a3=0 items=0 ppid=1 pid=5990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:59.768613 kernel: audit: type=1300 audit(1765850879.722:796): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc93c4120 a2=3 a3=0 items=0 ppid=1 pid=5990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:07:59.722000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:07:59.776823 kernel: audit: type=1327 audit(1765850879.722:796): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:07:59.779976 systemd-logind[2060]: New session 13 of user core. Dec 16 02:07:59.786941 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 02:07:59.789000 audit[5990]: USER_START pid=5990 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:59.809000 audit[5994]: CRED_ACQ pid=5994 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:59.824869 kernel: audit: type=1105 audit(1765850879.789:797): pid=5990 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:07:59.824959 kernel: audit: type=1103 audit(1765850879.809:798): pid=5994 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.017521 sshd[5994]: Connection closed by 10.200.16.10 port 34332 Dec 16 02:08:00.018070 sshd-session[5990]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:00.018000 audit[5990]: USER_END pid=5990 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.022046 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:34332.service: Deactivated successfully. Dec 16 02:08:00.024765 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 02:08:00.018000 audit[5990]: CRED_DISP pid=5990 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.038703 systemd-logind[2060]: Session 13 logged out. Waiting for processes to exit. Dec 16 02:08:00.041483 systemd-logind[2060]: Removed session 13. Dec 16 02:08:00.052036 kernel: audit: type=1106 audit(1765850880.018:799): pid=5990 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.052126 kernel: audit: type=1104 audit(1765850880.018:800): pid=5990 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:34332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:00.104574 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:58982.service - OpenSSH per-connection server daemon (10.200.16.10:58982). Dec 16 02:08:00.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.37:22-10.200.16.10:58982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.527000 audit[6007]: USER_ACCT pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.528614 sshd[6007]: Accepted publickey for core from 10.200.16.10 port 58982 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:00.529000 audit[6007]: CRED_ACQ pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.529000 audit[6007]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5e1e090 a2=3 a3=0 items=0 ppid=1 pid=6007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:00.529000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:00.531192 sshd-session[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:00.538745 systemd-logind[2060]: New session 14 of user core. Dec 16 02:08:00.543944 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 02:08:00.545000 audit[6007]: USER_START pid=6007 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.547000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.844772 sshd[6011]: Connection closed by 10.200.16.10 port 58982 Dec 16 02:08:00.845256 sshd-session[6007]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:00.846000 audit[6007]: USER_END pid=6007 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.846000 audit[6007]: CRED_DISP pid=6007 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:00.851204 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:58982.service: Deactivated successfully. 
Dec 16 02:08:00.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.37:22-10.200.16.10:58982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.853560 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 02:08:00.856594 systemd-logind[2060]: Session 14 logged out. Waiting for processes to exit. Dec 16 02:08:00.858875 systemd-logind[2060]: Removed session 14. Dec 16 02:08:00.939162 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:58984.service - OpenSSH per-connection server daemon (10.200.16.10:58984). Dec 16 02:08:00.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.37:22-10.200.16.10:58984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:01.368000 audit[6021]: USER_ACCT pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:01.369998 sshd[6021]: Accepted publickey for core from 10.200.16.10 port 58984 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:01.370000 audit[6021]: CRED_ACQ pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:01.370000 audit[6021]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd27dfa0 a2=3 a3=0 items=0 ppid=1 pid=6021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:01.370000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:01.371806 sshd-session[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:01.375598 systemd-logind[2060]: New session 15 of user core. Dec 16 02:08:01.385935 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 02:08:01.387000 audit[6021]: USER_START pid=6021 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:01.389000 audit[6025]: CRED_ACQ pid=6025 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:01.647277 sshd[6025]: Connection closed by 10.200.16.10 port 58984 Dec 16 02:08:01.647042 sshd-session[6021]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:01.648000 audit[6021]: USER_END pid=6021 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:01.648000 audit[6021]: CRED_DISP pid=6021 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:01.651499 systemd-logind[2060]: Session 15 logged out. Waiting for processes to exit. Dec 16 02:08:01.651849 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:58984.service: Deactivated successfully. Dec 16 02:08:01.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.37:22-10.200.16.10:58984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:01.653984 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 02:08:01.656432 systemd-logind[2060]: Removed session 15. 
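Sessions 12 through 15 above each last well under a second from USER_START to USER_END, which is consistent with a scripted SSH client opening one connection per command rather than an interactive login. The paired audit records are enough to measure that directly; a sketch under the assumption that the journal has been exported with one record per line (the regexes and helper are illustrative):

```python
import re

# e.g. "Dec 16 02:07:53.919000 audit[5940]: USER_START pid=5940 ... ses=12 ..."
RECORD = re.compile(r"audit\[\d+\]: (USER_START|USER_END) .*?\bses=(\d+)")
STAMP = re.compile(r"\b(\d{2}):(\d{2}):(\d{2}\.\d{6})\b")

def session_durations(lines):
    """Yield (session id, seconds between USER_START and USER_END); ignores day rollover."""
    opened = {}
    for line in lines:
        rec, ts = RECORD.search(line), STAMP.search(line)
        if not (rec and ts):
            continue
        kind, ses = rec.groups()
        h, m, s = ts.groups()
        t = int(h) * 3600 + int(m) * 60 + float(s)
        if kind == "USER_START":
            opened[ses] = t
        elif ses in opened:
            yield ses, t - opened.pop(ses)

# Session 12 above: USER_START at 02:07:53.919, USER_END at 02:07:54.178 -> ~0.26 s
```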
Dec 16 02:08:03.292370 kubelet[3639]: E1216 02:08:03.292329 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:08:03.293375 kubelet[3639]: E1216 02:08:03.292476 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:08:05.291395 kubelet[3639]: E1216 02:08:05.290901 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:08:06.289677 kubelet[3639]: E1216 02:08:06.289613 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:08:06.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.730219 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:59000.service - OpenSSH per-connection server daemon (10.200.16.10:59000). 
Dec 16 02:08:06.733797 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 02:08:06.733877 kernel: audit: type=1130 audit(1765850886.729:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:07.227747 sshd[6042]: Accepted publickey for core from 10.200.16.10 port 59000 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:07.226000 audit[6042]: USER_ACCT pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.242000 audit[6042]: CRED_ACQ pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.244190 sshd-session[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:07.259720 kernel: audit: type=1101 audit(1765850887.226:821): pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.259805 kernel: audit: type=1103 audit(1765850887.242:822): pid=6042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.270395 kernel: audit: type=1006 audit(1765850887.242:823): pid=6042 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 02:08:07.242000 audit[6042]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdca8d190 a2=3 a3=0 items=0 ppid=1 pid=6042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:07.288990 kernel: audit: type=1300 audit(1765850887.242:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdca8d190 a2=3 a3=0 items=0 ppid=1 pid=6042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:07.242000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:07.296059 kubelet[3639]: E1216 02:08:07.294506 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:08:07.301808 kernel: audit: type=1327 audit(1765850887.242:823): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:07.302612 systemd-logind[2060]: New session 16 of user core. Dec 16 02:08:07.305949 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 02:08:07.308000 audit[6042]: USER_START pid=6042 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.311000 audit[6046]: CRED_ACQ pid=6046 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.352864 kernel: audit: type=1105 audit(1765850887.308:824): pid=6042 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.352947 kernel: audit: type=1103 audit(1765850887.311:825): pid=6046 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.522743 sshd[6046]: Connection closed by 10.200.16.10 port 59000 Dec 16 02:08:07.523996 sshd-session[6042]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:07.525000 audit[6042]: USER_END pid=6042 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.532960 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:59000.service: Deactivated successfully. Dec 16 02:08:07.536890 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 02:08:07.548697 systemd-logind[2060]: Session 16 logged out. Waiting for processes to exit. Dec 16 02:08:07.529000 audit[6042]: CRED_DISP pid=6042 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.563444 kernel: audit: type=1106 audit(1765850887.525:826): pid=6042 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.563547 kernel: audit: type=1104 audit(1765850887.529:827): pid=6042 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:07.564930 systemd-logind[2060]: Removed session 16. 
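Each userspace record above is echoed by the kernel as `kernel: audit: type=<number> audit(<stamp>):`, and matching the serials gives the numeric type behind each name (for example, 1105 pairs with USER_START and 1106 with USER_END in the session records above). A small lookup table limited to the types seen, or implied, in this excerpt; note that the 1006 and 1131 entries rely on the standard audit type numbering rather than on a pairing visible here:

```python
# Numeric audit record types observed (or implied) in the journal above, mapped to
# the names the userspace records use, derived from the paired
# "audit[<pid>]: <NAME> ..." and "kernel: audit: type=<num> audit(...)" lines.
AUDIT_TYPES = {
    1006: "LOGIN",          # only the kernel echo of this type appears above
    1101: "USER_ACCT",
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",
    1131: "SERVICE_STOP",   # counterpart of 1130; textual SERVICE_STOP records appear above
    1300: "SYSCALL",
    1327: "PROCTITLE",
}

def label(type_number: int) -> str:
    """Translate a numeric `type=` field into its record name, if known."""
    return AUDIT_TYPES.get(type_number, f"unknown({type_number})")

print(label(1105))  # -> USER_START
```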
Dec 16 02:08:07.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.290467 kubelet[3639]: E1216 02:08:08.290426 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:08:12.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:43020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.612041 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:43020.service - OpenSSH per-connection server daemon (10.200.16.10:43020). Dec 16 02:08:12.615804 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:08:12.615895 kernel: audit: type=1130 audit(1765850892.610:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:43020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:13.046000 audit[6059]: USER_ACCT pid=6059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.066283 sshd[6059]: Accepted publickey for core from 10.200.16.10 port 43020 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:13.066476 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:13.064000 audit[6059]: CRED_ACQ pid=6059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.067850 kernel: audit: type=1101 audit(1765850893.046:830): pid=6059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.093621 kernel: audit: type=1103 audit(1765850893.064:831): pid=6059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.093707 kernel: audit: type=1006 audit(1765850893.064:832): pid=6059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 02:08:13.087816 systemd-logind[2060]: New session 17 of user core. 
Dec 16 02:08:13.064000 audit[6059]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca687e70 a2=3 a3=0 items=0 ppid=1 pid=6059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:13.111497 kernel: audit: type=1300 audit(1765850893.064:832): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca687e70 a2=3 a3=0 items=0 ppid=1 pid=6059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:13.064000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:13.117349 kernel: audit: type=1327 audit(1765850893.064:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:13.118046 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 02:08:13.121000 audit[6059]: USER_START pid=6059 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.143907 kernel: audit: type=1105 audit(1765850893.121:833): pid=6059 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.143000 audit[6063]: CRED_ACQ pid=6063 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.159791 kernel: audit: type=1103 audit(1765850893.143:834): pid=6063 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.349896 sshd[6063]: Connection closed by 10.200.16.10 port 43020 Dec 16 02:08:13.349243 sshd-session[6059]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:13.349000 audit[6059]: USER_END pid=6059 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.355119 systemd-logind[2060]: Session 17 logged out. Waiting for processes to exit. Dec 16 02:08:13.356843 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:43020.service: Deactivated successfully. Dec 16 02:08:13.360974 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 02:08:13.363656 systemd-logind[2060]: Removed session 17. 
Dec 16 02:08:13.349000 audit[6059]: CRED_DISP pid=6059 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.390528 kernel: audit: type=1106 audit(1765850893.349:835): pid=6059 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.390619 kernel: audit: type=1104 audit(1765850893.349:836): pid=6059 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:13.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:43020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:15.292166 kubelet[3639]: E1216 02:08:15.291762 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:08:17.291356 kubelet[3639]: E1216 02:08:17.290983 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:08:18.290464 containerd[2088]: time="2025-12-16T02:08:18.290266641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:08:18.443918 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:43026.service - OpenSSH per-connection server daemon (10.200.16.10:43026). Dec 16 02:08:18.461360 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:08:18.461386 kernel: audit: type=1130 audit(1765850898.443:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:43026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:18.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:43026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:18.558117 containerd[2088]: time="2025-12-16T02:08:18.557778380Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:08:18.562094 containerd[2088]: time="2025-12-16T02:08:18.562051646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:08:18.562197 containerd[2088]: time="2025-12-16T02:08:18.562145457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:08:18.562407 kubelet[3639]: E1216 02:08:18.562355 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:08:18.562830 kubelet[3639]: E1216 02:08:18.562413 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:08:18.562830 kubelet[3639]: E1216 02:08:18.562521 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5fe0b406afc34a4baf35acad091248e2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q5xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9cf5fb98-qdzq2_calico-system(81a4ac18-79c3-4965-a33a-757950d44671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Dec 16 02:08:18.565905 containerd[2088]: time="2025-12-16T02:08:18.565869690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:08:18.857558 containerd[2088]: time="2025-12-16T02:08:18.857232665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:08:18.861791 containerd[2088]: time="2025-12-16T02:08:18.861677065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:08:18.861791 containerd[2088]: time="2025-12-16T02:08:18.861728739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:08:18.862165 kubelet[3639]: E1216 02:08:18.862107 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:08:18.862165 kubelet[3639]: E1216 02:08:18.862164 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:08:18.862370 kubelet[3639]: E1216 02:08:18.862279 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePoli
cy{},RestartPolicy:nil,} start failed in pod whisker-6f9cf5fb98-qdzq2_calico-system(81a4ac18-79c3-4965-a33a-757950d44671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:08:18.863651 kubelet[3639]: E1216 02:08:18.863583 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:08:18.887000 audit[6080]: USER_ACCT pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:18.888814 sshd[6080]: Accepted publickey for core from 10.200.16.10 port 43026 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:18.906887 kernel: audit: type=1101 audit(1765850898.887:839): pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:18.906000 audit[6080]: CRED_ACQ pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:18.907610 sshd-session[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:18.932182 kernel: audit: type=1103 audit(1765850898.906:840): pid=6080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:18.932292 kernel: audit: type=1006 audit(1765850898.906:841): pid=6080 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 02:08:18.906000 audit[6080]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb1a4ce0 a2=3 a3=0 items=0 ppid=1 pid=6080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:18.937889 systemd-logind[2060]: New session 18 of user core. 
Dec 16 02:08:18.951138 kernel: audit: type=1300 audit(1765850898.906:841): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb1a4ce0 a2=3 a3=0 items=0 ppid=1 pid=6080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:18.906000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:18.959396 kernel: audit: type=1327 audit(1765850898.906:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:18.962248 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 02:08:18.965000 audit[6080]: USER_START pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:18.969000 audit[6084]: CRED_ACQ pid=6084 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.001332 kernel: audit: type=1105 audit(1765850898.965:842): pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.001765 kernel: audit: type=1103 audit(1765850898.969:843): pid=6084 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.213940 sshd[6084]: Connection closed by 10.200.16.10 port 43026 Dec 16 02:08:19.215495 sshd-session[6080]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:19.215000 audit[6080]: USER_END pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.219921 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:43026.service: Deactivated successfully. Dec 16 02:08:19.224212 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 02:08:19.225861 systemd-logind[2060]: Session 18 logged out. Waiting for processes to exit. Dec 16 02:08:19.227608 systemd-logind[2060]: Removed session 18. 
Dec 16 02:08:19.216000 audit[6080]: CRED_DISP pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.252811 kernel: audit: type=1106 audit(1765850899.215:844): pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.252912 kernel: audit: type=1104 audit(1765850899.216:845): pid=6080 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:43026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:19.291812 kubelet[3639]: E1216 02:08:19.291761 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:08:19.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.37:22-10.200.16.10:43042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:19.304032 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:43042.service - OpenSSH per-connection server daemon (10.200.16.10:43042). 
Dec 16 02:08:19.722000 audit[6096]: USER_ACCT pid=6096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.725921 sshd[6096]: Accepted publickey for core from 10.200.16.10 port 43042 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:19.725000 audit[6096]: CRED_ACQ pid=6096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.725000 audit[6096]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd66f7b60 a2=3 a3=0 items=0 ppid=1 pid=6096 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:19.725000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:19.727464 sshd-session[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:19.732107 systemd-logind[2060]: New session 19 of user core. Dec 16 02:08:19.737991 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 02:08:19.740000 audit[6096]: USER_START pid=6096 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:19.742000 audit[6100]: CRED_ACQ pid=6100 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:20.468959 sshd[6100]: Connection closed by 10.200.16.10 port 43042 Dec 16 02:08:20.469253 sshd-session[6096]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:20.470000 audit[6096]: USER_END pid=6096 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:20.470000 audit[6096]: CRED_DISP pid=6096 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:20.475395 systemd-logind[2060]: Session 19 logged out. Waiting for processes to exit. Dec 16 02:08:20.476210 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:43042.service: Deactivated successfully. Dec 16 02:08:20.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.37:22-10.200.16.10:43042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:20.479352 systemd[1]: session-19.scope: Deactivated successfully. 
Dec 16 02:08:20.486000 systemd-logind[2060]: Removed session 19. Dec 16 02:08:20.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.37:22-10.200.16.10:55602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:20.560399 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:55602.service - OpenSSH per-connection server daemon (10.200.16.10:55602). Dec 16 02:08:20.952000 audit[6110]: USER_ACCT pid=6110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:20.953984 sshd[6110]: Accepted publickey for core from 10.200.16.10 port 55602 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:20.954000 audit[6110]: CRED_ACQ pid=6110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:20.954000 audit[6110]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8e63f10 a2=3 a3=0 items=0 ppid=1 pid=6110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:20.954000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:20.955714 sshd-session[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:20.959868 systemd-logind[2060]: New session 20 of user core. Dec 16 02:08:20.968002 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 02:08:20.971000 audit[6110]: USER_START pid=6110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:20.972000 audit[6114]: CRED_ACQ pid=6114 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:21.296067 kubelet[3639]: E1216 02:08:21.295999 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:08:21.296645 kubelet[3639]: E1216 02:08:21.296497 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:08:21.886000 audit[6128]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6128 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:08:21.886000 audit[6128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc64da670 a2=0 a3=1 items=0 ppid=3793 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:21.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:08:21.890000 audit[6128]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6128 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:08:21.890000 audit[6128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc64da670 a2=0 a3=1 items=0 ppid=3793 pid=6128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:21.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:08:21.986000 audit[6130]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=6130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:08:21.986000 audit[6130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff3283560 a2=0 a3=1 items=0 ppid=3793 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:21.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:08:21.992000 audit[6130]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:08:21.992000 audit[6130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff3283560 a2=0 a3=1 items=0 ppid=3793 pid=6130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:21.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:08:22.106617 sshd[6114]: Connection closed by 10.200.16.10 port 55602 Dec 16 02:08:22.107195 sshd-session[6110]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:22.108000 audit[6110]: USER_END pid=6110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:22.108000 audit[6110]: CRED_DISP pid=6110 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:22.111429 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:55602.service: Deactivated successfully. Dec 16 02:08:22.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.37:22-10.200.16.10:55602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.114261 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 02:08:22.115904 systemd-logind[2060]: Session 20 logged out. Waiting for processes to exit. Dec 16 02:08:22.116993 systemd-logind[2060]: Removed session 20. Dec 16 02:08:22.190651 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:55618.service - OpenSSH per-connection server daemon (10.200.16.10:55618). Dec 16 02:08:22.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.37:22-10.200.16.10:55618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:22.590000 audit[6135]: USER_ACCT pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:22.591722 sshd[6135]: Accepted publickey for core from 10.200.16.10 port 55618 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:22.594140 sshd-session[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:22.592000 audit[6135]: CRED_ACQ pid=6135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:22.592000 audit[6135]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfd37020 a2=3 a3=0 items=0 ppid=1 pid=6135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:22.592000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:22.602345 systemd-logind[2060]: New session 21 of user core. Dec 16 02:08:22.607035 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 02:08:22.610000 audit[6135]: USER_START pid=6135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:22.613000 audit[6139]: CRED_ACQ pid=6139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:23.004935 sshd[6139]: Connection closed by 10.200.16.10 port 55618 Dec 16 02:08:23.004885 sshd-session[6135]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:23.006000 audit[6135]: USER_END pid=6135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:23.006000 audit[6135]: CRED_DISP pid=6135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:23.009418 systemd-logind[2060]: Session 21 logged out. Waiting for processes to exit. Dec 16 02:08:23.009683 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:55618.service: Deactivated successfully. Dec 16 02:08:23.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.37:22-10.200.16.10:55618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:23.011458 systemd[1]: session-21.scope: Deactivated successfully. 
Dec 16 02:08:23.014342 systemd-logind[2060]: Removed session 21. Dec 16 02:08:23.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.37:22-10.200.16.10:55628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:23.096777 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:55628.service - OpenSSH per-connection server daemon (10.200.16.10:55628). Dec 16 02:08:23.546955 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 16 02:08:24.422848 kernel: audit: type=1101 audit(1765850903.526:879): pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.422919 kernel: audit: type=1103 audit(1765850903.547:880): pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.422937 kernel: audit: type=1006 audit(1765850903.547:881): pid=6151 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 02:08:24.422954 kernel: audit: type=1300 audit(1765850903.547:881): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd07cde40 a2=3 a3=0 items=0 ppid=1 pid=6151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.422969 kernel: audit: type=1327 audit(1765850903.547:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:23.526000 audit[6151]: USER_ACCT pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:23.547000 audit[6151]: CRED_ACQ pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:23.547000 audit[6151]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd07cde40 a2=3 a3=0 items=0 ppid=1 pid=6151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:23.547000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:24.423217 sshd[6151]: Accepted publickey for core from 10.200.16.10 port 55628 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:23.578310 systemd-logind[2060]: New session 22 of user core. 
Dec 16 02:08:24.468894 kernel: audit: type=1105 audit(1765850904.422:882): pid=6151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.468975 kernel: audit: type=1103 audit(1765850904.443:883): pid=6155 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.422000 audit[6151]: USER_START pid=6151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.443000 audit[6155]: CRED_ACQ pid=6155 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:23.555024 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:23.601076 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 02:08:24.653752 sshd[6155]: Connection closed by 10.200.16.10 port 55628 Dec 16 02:08:24.655525 sshd-session[6151]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:24.655000 audit[6151]: USER_END pid=6151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.660145 systemd-logind[2060]: Session 22 logged out. Waiting for processes to exit. Dec 16 02:08:24.660714 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:55628.service: Deactivated successfully. Dec 16 02:08:24.664657 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 02:08:24.668063 systemd-logind[2060]: Removed session 22. 
Dec 16 02:08:24.656000 audit[6151]: CRED_DISP pid=6151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.693753 kernel: audit: type=1106 audit(1765850904.655:884): pid=6151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.693910 kernel: audit: type=1104 audit(1765850904.656:885): pid=6151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:24.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.37:22-10.200.16.10:55628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.709802 kernel: audit: type=1131 audit(1765850904.660:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.37:22-10.200.16.10:55628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:27.047000 audit[6204]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=6204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:08:27.047000 audit[6204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff99608e0 a2=0 a3=1 items=0 ppid=3793 pid=6204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:08:27.053000 audit[6204]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=6204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:08:27.053000 audit[6204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff99608e0 a2=0 a3=1 items=0 ppid=3793 pid=6204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:08:27.291931 kubelet[3639]: E1216 02:08:27.291884 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:08:29.739995 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:55630.service - OpenSSH per-connection server daemon (10.200.16.10:55630). Dec 16 02:08:29.758887 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 02:08:29.759021 kernel: audit: type=1130 audit(1765850909.739:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.37:22-10.200.16.10:55630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:29.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.37:22-10.200.16.10:55630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:30.158000 audit[6208]: USER_ACCT pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.160195 sshd[6208]: Accepted publickey for core from 10.200.16.10 port 55630 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:30.175000 audit[6208]: CRED_ACQ pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.177058 sshd-session[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:30.191442 kernel: audit: type=1101 audit(1765850910.158:890): pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.191550 kernel: audit: type=1103 audit(1765850910.175:891): pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.198189 systemd-logind[2060]: New session 23 of user core. 
Dec 16 02:08:30.201350 kernel: audit: type=1006 audit(1765850910.175:892): pid=6208 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 02:08:30.175000 audit[6208]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffac407c0 a2=3 a3=0 items=0 ppid=1 pid=6208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:30.218553 kernel: audit: type=1300 audit(1765850910.175:892): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffac407c0 a2=3 a3=0 items=0 ppid=1 pid=6208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:30.175000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:30.225255 kernel: audit: type=1327 audit(1765850910.175:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:30.226043 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 02:08:30.229000 audit[6208]: USER_START pid=6208 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.248000 audit[6219]: CRED_ACQ pid=6219 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.263549 kernel: audit: type=1105 audit(1765850910.229:893): pid=6208 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.263655 kernel: audit: type=1103 audit(1765850910.248:894): pid=6219 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.438110 sshd[6219]: Connection closed by 10.200.16.10 port 55630 Dec 16 02:08:30.438724 sshd-session[6208]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:30.439000 audit[6208]: USER_END pid=6208 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.443135 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:55630.service: Deactivated successfully. Dec 16 02:08:30.445423 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 02:08:30.446492 systemd-logind[2060]: Session 23 logged out. Waiting for processes to exit. Dec 16 02:08:30.448414 systemd-logind[2060]: Removed session 23. 
Dec 16 02:08:30.439000 audit[6208]: CRED_DISP pid=6208 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.474319 kernel: audit: type=1106 audit(1765850910.439:895): pid=6208 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.474513 kernel: audit: type=1104 audit(1765850910.439:896): pid=6208 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:30.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.37:22-10.200.16.10:55630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:31.293953 kubelet[3639]: E1216 02:08:31.293645 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:08:34.291122 containerd[2088]: time="2025-12-16T02:08:34.290807170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:08:34.291498 kubelet[3639]: E1216 02:08:34.290877 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:08:34.291498 kubelet[3639]: E1216 02:08:34.291161 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 
02:08:34.549826 containerd[2088]: time="2025-12-16T02:08:34.548496342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:08:34.555995 containerd[2088]: time="2025-12-16T02:08:34.555890173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:08:34.555995 containerd[2088]: time="2025-12-16T02:08:34.555942263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:08:34.556182 kubelet[3639]: E1216 02:08:34.556140 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:08:34.556247 kubelet[3639]: E1216 02:08:34.556192 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:08:34.556334 kubelet[3639]: E1216 02:08:34.556307 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl6qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-69677bd74f-gv2ff_calico-apiserver(e320a820-257c-48b1-85de-c1dd7b465c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:08:34.558069 kubelet[3639]: E1216 02:08:34.558029 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:08:35.527425 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:48390.service - OpenSSH per-connection server daemon (10.200.16.10:48390). Dec 16 02:08:35.533814 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:08:35.533916 kernel: audit: type=1130 audit(1765850915.526:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:48390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:35.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:48390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:35.963072 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 48390 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:35.962000 audit[6233]: USER_ACCT pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:35.980016 sshd-session[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:35.986148 systemd-logind[2060]: New session 24 of user core. Dec 16 02:08:35.977000 audit[6233]: CRED_ACQ pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.001563 kernel: audit: type=1101 audit(1765850915.962:899): pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.001687 kernel: audit: type=1103 audit(1765850915.977:900): pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.002845 kernel: audit: type=1006 audit(1765850915.977:901): pid=6233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 02:08:36.004054 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 02:08:35.977000 audit[6233]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6aeeca0 a2=3 a3=0 items=0 ppid=1 pid=6233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:36.028547 kernel: audit: type=1300 audit(1765850915.977:901): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6aeeca0 a2=3 a3=0 items=0 ppid=1 pid=6233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:35.977000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:36.036434 kernel: audit: type=1327 audit(1765850915.977:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:36.011000 audit[6233]: USER_START pid=6233 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.055814 kernel: audit: type=1105 audit(1765850916.011:902): pid=6233 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.036000 audit[6237]: CRED_ACQ pid=6237 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.071538 kernel: audit: type=1103 audit(1765850916.036:903): pid=6237 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.280430 sshd[6237]: Connection closed by 10.200.16.10 port 48390 Dec 16 02:08:36.280738 sshd-session[6233]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:36.282000 audit[6233]: USER_END pid=6233 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.286704 systemd-logind[2060]: Session 24 logged out. Waiting for processes to exit. Dec 16 02:08:36.288596 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:48390.service: Deactivated successfully. Dec 16 02:08:36.295336 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 02:08:36.298252 systemd-logind[2060]: Removed session 24. 
Dec 16 02:08:36.282000 audit[6233]: CRED_DISP pid=6233 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.305627 containerd[2088]: time="2025-12-16T02:08:36.305346220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:08:36.317383 kernel: audit: type=1106 audit(1765850916.282:904): pid=6233 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.317502 kernel: audit: type=1104 audit(1765850916.282:905): pid=6233 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:36.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:48390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:36.578196 containerd[2088]: time="2025-12-16T02:08:36.578042593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:08:36.586169 containerd[2088]: time="2025-12-16T02:08:36.586054764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:08:36.586169 containerd[2088]: time="2025-12-16T02:08:36.586100173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:08:36.586395 kubelet[3639]: E1216 02:08:36.586355 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:08:36.586755 kubelet[3639]: E1216 02:08:36.586404 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:08:36.586755 kubelet[3639]: E1216 02:08:36.586519 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m25gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s52v5_calico-system(f469a490-fb18-4868-b583-cd075b9a892c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:08:36.587969 kubelet[3639]: E1216 02:08:36.587935 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:08:41.291265 containerd[2088]: time="2025-12-16T02:08:41.291202343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 
02:08:41.371320 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:08:41.371433 kernel: audit: type=1130 audit(1765850921.367:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:52182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:41.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:52182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:41.368305 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:52182.service - OpenSSH per-connection server daemon (10.200.16.10:52182). Dec 16 02:08:41.565778 containerd[2088]: time="2025-12-16T02:08:41.565222583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:08:41.570690 containerd[2088]: time="2025-12-16T02:08:41.569734513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:08:41.570690 containerd[2088]: time="2025-12-16T02:08:41.569760257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:08:41.571152 kubelet[3639]: E1216 02:08:41.571069 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:08:41.571551 kubelet[3639]: E1216 02:08:41.571154 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:08:41.571551 kubelet[3639]: E1216 02:08:41.571297 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw58z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:08:41.573880 containerd[2088]: time="2025-12-16T02:08:41.573149839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:08:41.806000 audit[6250]: USER_ACCT pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:41.824517 sshd-session[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:41.825115 sshd[6250]: Accepted publickey for core from 10.200.16.10 port 52182 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:41.829556 systemd-logind[2060]: New session 25 of user core. 
Dec 16 02:08:41.831847 kernel: audit: type=1101 audit(1765850921.806:908): pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:41.831926 kernel: audit: type=1103 audit(1765850921.822:909): pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:41.822000 audit[6250]: CRED_ACQ pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:41.855991 kernel: audit: type=1006 audit(1765850921.822:910): pid=6250 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 02:08:41.857823 kernel: audit: type=1300 audit(1765850921.822:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc434c10 a2=3 a3=0 items=0 ppid=1 pid=6250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:41.822000 audit[6250]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc434c10 a2=3 a3=0 items=0 ppid=1 pid=6250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:41.857033 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 02:08:41.875025 containerd[2088]: time="2025-12-16T02:08:41.874975575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:08:41.822000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:41.881388 kernel: audit: type=1327 audit(1765850921.822:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:41.860000 audit[6250]: USER_START pid=6250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:41.900097 kernel: audit: type=1105 audit(1765850921.860:911): pid=6250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:41.902059 containerd[2088]: time="2025-12-16T02:08:41.901999087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:08:41.873000 audit[6254]: CRED_ACQ pid=6254 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:41.903173 containerd[2088]: time="2025-12-16T02:08:41.902000327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:08:41.903661 kubelet[3639]: E1216 02:08:41.903034 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:08:41.903661 kubelet[3639]: E1216 02:08:41.903366 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:08:41.903661 kubelet[3639]: E1216 02:08:41.903469 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw58z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rmzsh_calico-system(8d6bb703-5160-48e3-8477-a1bbde860409): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:08:41.904777 kubelet[3639]: E1216 02:08:41.904747 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmzsh" podUID="8d6bb703-5160-48e3-8477-a1bbde860409" Dec 16 02:08:41.915504 kernel: audit: type=1103 audit(1765850921.873:912): pid=6254 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:42.111687 sshd[6254]: Connection closed by 10.200.16.10 port 52182 Dec 16 02:08:42.112228 sshd-session[6250]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:42.111000 audit[6250]: USER_END pid=6250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:42.117582 systemd-logind[2060]: Session 25 logged out. Waiting for processes to exit. Dec 16 02:08:42.118239 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:52182.service: Deactivated successfully. Dec 16 02:08:42.122272 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 02:08:42.125645 systemd-logind[2060]: Removed session 25. Dec 16 02:08:42.111000 audit[6250]: CRED_DISP pid=6250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:42.149819 kernel: audit: type=1106 audit(1765850922.111:913): pid=6250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:42.149961 kernel: audit: type=1104 audit(1765850922.111:914): pid=6250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:42.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:52182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:42.291266 containerd[2088]: time="2025-12-16T02:08:42.290839943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:08:42.549616 containerd[2088]: time="2025-12-16T02:08:42.549567481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:08:42.553856 containerd[2088]: time="2025-12-16T02:08:42.553680182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:08:42.553856 containerd[2088]: time="2025-12-16T02:08:42.553730959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:08:42.554352 kubelet[3639]: E1216 02:08:42.554310 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:08:42.554352 kubelet[3639]: E1216 02:08:42.554358 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:08:42.554352 kubelet[3639]: E1216 02:08:42.554478 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2b6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549bfc7bd9-vst6l_calico-system(222ba326-59d2-4676-b30c-82b655f93a5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:08:42.557299 kubelet[3639]: E1216 02:08:42.556441 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549bfc7bd9-vst6l" podUID="222ba326-59d2-4676-b30c-82b655f93a5f" Dec 16 02:08:47.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:52194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:47.205041 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:52194.service - OpenSSH per-connection server daemon (10.200.16.10:52194). Dec 16 02:08:47.208706 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:08:47.208777 kernel: audit: type=1130 audit(1765850927.203:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:52194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:47.295818 kubelet[3639]: E1216 02:08:47.295681 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-gv2ff" podUID="e320a820-257c-48b1-85de-c1dd7b465c9a" Dec 16 02:08:47.297944 containerd[2088]: time="2025-12-16T02:08:47.297734608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:08:47.563743 containerd[2088]: time="2025-12-16T02:08:47.563558690Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:08:47.567990 containerd[2088]: time="2025-12-16T02:08:47.567853829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:08:47.567990 containerd[2088]: time="2025-12-16T02:08:47.567950776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:08:47.569560 kubelet[3639]: E1216 02:08:47.568242 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:08:47.569560 kubelet[3639]: E1216 02:08:47.568296 3639 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:08:47.569560 kubelet[3639]: E1216 02:08:47.568412 3639 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7pm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69677bd74f-b42tq_calico-apiserver(5f7514dc-37b5-4bac-8d4e-04c87fb3f679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:08:47.569954 kubelet[3639]: E1216 02:08:47.569864 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69677bd74f-b42tq" podUID="5f7514dc-37b5-4bac-8d4e-04c87fb3f679" Dec 16 02:08:47.648000 audit[6266]: USER_ACCT pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.653939 sshd[6266]: Accepted publickey for core from 10.200.16.10 port 52194 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:47.668817 kernel: audit: type=1101 audit(1765850927.648:917): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.669000 audit[6266]: CRED_ACQ pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.671902 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:47.696093 systemd-logind[2060]: New session 26 of user core. 
Dec 16 02:08:47.697976 kernel: audit: type=1103 audit(1765850927.669:918): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.698037 kernel: audit: type=1006 audit(1765850927.669:919): pid=6266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 02:08:47.669000 audit[6266]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebf29f80 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:47.716229 kernel: audit: type=1300 audit(1765850927.669:919): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebf29f80 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:47.717324 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 02:08:47.669000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:47.726963 kernel: audit: type=1327 audit(1765850927.669:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:47.721000 audit[6266]: USER_START pid=6266 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.748025 kernel: audit: type=1105 audit(1765850927.721:920): pid=6266 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.723000 audit[6270]: CRED_ACQ pid=6270 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.764442 kernel: audit: type=1103 audit(1765850927.723:921): pid=6270 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.951034 sshd[6270]: Connection closed by 10.200.16.10 port 52194 Dec 16 02:08:47.951994 sshd-session[6266]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:47.953000 audit[6266]: USER_END pid=6266 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.957186 systemd-logind[2060]: Session 26 logged out. Waiting for processes to exit. 
Dec 16 02:08:47.958051 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:52194.service: Deactivated successfully. Dec 16 02:08:47.961076 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 02:08:47.964406 systemd-logind[2060]: Removed session 26. Dec 16 02:08:47.953000 audit[6266]: CRED_DISP pid=6266 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.994135 kernel: audit: type=1106 audit(1765850927.953:922): pid=6266 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.994293 kernel: audit: type=1104 audit(1765850927.953:923): pid=6266 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:47.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:52194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:48.292578 kubelet[3639]: E1216 02:08:48.291766 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9cf5fb98-qdzq2" podUID="81a4ac18-79c3-4965-a33a-757950d44671" Dec 16 02:08:48.293513 kubelet[3639]: E1216 02:08:48.293486 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s52v5" podUID="f469a490-fb18-4868-b583-cd075b9a892c" Dec 16 02:08:53.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:50740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:53.041032 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:50740.service - OpenSSH per-connection server daemon (10.200.16.10:50740). 
Dec 16 02:08:53.044200 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:08:53.044383 kernel: audit: type=1130 audit(1765850933.040:925): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:50740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:53.480000 audit[6282]: USER_ACCT pid=6282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.496978 sshd[6282]: Accepted publickey for core from 10.200.16.10 port 50740 ssh2: RSA SHA256:q0d+t8NrnEkvYDvKISZf6dOJMBNkfgNCAz4kbngtSmM Dec 16 02:08:53.498890 sshd-session[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:53.497000 audit[6282]: CRED_ACQ pid=6282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.507315 systemd-logind[2060]: New session 27 of user core. Dec 16 02:08:53.515797 kernel: audit: type=1101 audit(1765850933.480:926): pid=6282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.515896 kernel: audit: type=1103 audit(1765850933.497:927): pid=6282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.525940 kernel: audit: type=1006 audit(1765850933.497:928): pid=6282 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 02:08:53.497000 audit[6282]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd281a10 a2=3 a3=0 items=0 ppid=1 pid=6282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.528097 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 02:08:53.543670 kernel: audit: type=1300 audit(1765850933.497:928): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd281a10 a2=3 a3=0 items=0 ppid=1 pid=6282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.497000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:53.552677 kernel: audit: type=1327 audit(1765850933.497:928): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:53.532000 audit[6282]: USER_START pid=6282 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.571938 kernel: audit: type=1105 audit(1765850933.532:929): pid=6282 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.545000 audit[6286]: CRED_ACQ pid=6286 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.590370 kernel: audit: type=1103 audit(1765850933.545:930): pid=6286 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.784771 sshd[6286]: Connection closed by 10.200.16.10 port 50740 Dec 16 02:08:53.785366 sshd-session[6282]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:53.785000 audit[6282]: USER_END pid=6282 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.790893 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:50740.service: Deactivated successfully. Dec 16 02:08:53.795024 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 02:08:53.798376 systemd-logind[2060]: Session 27 logged out. Waiting for processes to exit. Dec 16 02:08:53.803460 systemd-logind[2060]: Removed session 27. 
Dec 16 02:08:53.785000 audit[6282]: CRED_DISP pid=6282 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.821559 kernel: audit: type=1106 audit(1765850933.785:931): pid=6282 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.821690 kernel: audit: type=1104 audit(1765850933.785:932): pid=6282 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 02:08:53.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:50740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'