Dec 16 12:44:21.355504 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Dec 16 12:44:21.355524 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025
Dec 16 12:44:21.355531 kernel: KASLR enabled
Dec 16 12:44:21.355535 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 16 12:44:21.355540 kernel: printk: legacy bootconsole [pl11] enabled
Dec 16 12:44:21.355544 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:44:21.355550 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Dec 16 12:44:21.355554 kernel: random: crng init done
Dec 16 12:44:21.355558 kernel: secureboot: Secure boot disabled
Dec 16 12:44:21.355562 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:44:21.355566 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Dec 16 12:44:21.355571 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355575 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355580 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 12:44:21.355585 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355590 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355594 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355600 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355604 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355609 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355613 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 16 12:44:21.355618 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:44:21.355622 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 16 12:44:21.355626 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:44:21.355631 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 16 12:44:21.355635 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Dec 16 12:44:21.355640 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Dec 16 12:44:21.355645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 16 12:44:21.355650 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 16 12:44:21.355654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 16 12:44:21.355659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 16 12:44:21.355663 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 16 12:44:21.355668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 16 12:44:21.355672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 16 12:44:21.355677 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 16 12:44:21.355681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 16 12:44:21.355686 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Dec 16 12:44:21.355690 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Dec 16 12:44:21.355696 kernel: Zone ranges:
Dec 16 12:44:21.355700 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 16 12:44:21.355707 kernel: DMA32 empty
Dec 16 12:44:21.355711 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:44:21.355716 kernel: Device empty
Dec 16 12:44:21.355722 kernel: Movable zone start for each node
Dec 16 12:44:21.355726 kernel: Early memory node ranges
Dec 16 12:44:21.355731 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 16 12:44:21.355736 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Dec 16 12:44:21.355740 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Dec 16 12:44:21.355745 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Dec 16 12:44:21.355749 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Dec 16 12:44:21.355754 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Dec 16 12:44:21.355759 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:44:21.355764 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 16 12:44:21.355769 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 16 12:44:21.355774 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Dec 16 12:44:21.355778 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:44:21.355783 kernel: psci: PSCIv1.3 detected in firmware.
Dec 16 12:44:21.355787 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:44:21.355792 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 16 12:44:21.355797 kernel: psci: SMC Calling Convention v1.4
Dec 16 12:44:21.355801 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 16 12:44:21.355806 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 16 12:44:21.355811 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:44:21.355815 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:44:21.355821 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:44:21.355826 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:44:21.355831 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Dec 16 12:44:21.355835 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:44:21.355840 kernel: CPU features: detected: Spectre-v4
Dec 16 12:44:21.355845 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:44:21.355849 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:44:21.355854 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:44:21.355859 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Dec 16 12:44:21.355863 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:44:21.355882 kernel: alternatives: applying boot alternatives
Dec 16 12:44:21.355888 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849
Dec 16 12:44:21.355893 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:44:21.355898 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:44:21.355903 kernel: Fallback order for Node 0: 0
Dec 16 12:44:21.355907 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Dec 16 12:44:21.355912 kernel: Policy zone: Normal
Dec 16 12:44:21.355916 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:44:21.355921 kernel: software IO TLB: area num 2.
Dec 16 12:44:21.355926 kernel: software IO TLB: mapped [mem 0x0000000037380000-0x000000003b380000] (64MB)
Dec 16 12:44:21.355930 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:44:21.355936 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:44:21.355942 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:44:21.355947 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:44:21.355951 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:44:21.355956 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:44:21.355961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:44:21.355966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:44:21.355970 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:44:21.355975 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:44:21.355980 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:44:21.355984 kernel: GICv3: 960 SPIs implemented
Dec 16 12:44:21.355990 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:44:21.355995 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:44:21.355999 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Dec 16 12:44:21.356004 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Dec 16 12:44:21.356009 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 16 12:44:21.356013 kernel: ITS: No ITS available, not enabling LPIs
Dec 16 12:44:21.356018 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:44:21.356023 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Dec 16 12:44:21.356027 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 12:44:21.356032 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Dec 16 12:44:21.356037 kernel: Console: colour dummy device 80x25
Dec 16 12:44:21.356043 kernel: printk: legacy console [tty1] enabled
Dec 16 12:44:21.356048 kernel: ACPI: Core revision 20240827
Dec 16 12:44:21.356053 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Dec 16 12:44:21.356058 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:44:21.356063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:44:21.356068 kernel: landlock: Up and running.
Dec 16 12:44:21.356072 kernel: SELinux: Initializing.
Dec 16 12:44:21.356078 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:44:21.356083 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:44:21.356088 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Dec 16 12:44:21.356093 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Dec 16 12:44:21.356102 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 12:44:21.356108 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:44:21.356113 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:44:21.356118 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:44:21.356123 kernel: Remapping and enabling EFI services.
Dec 16 12:44:21.356129 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:44:21.356135 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:44:21.356140 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 16 12:44:21.356145 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Dec 16 12:44:21.356151 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:44:21.356156 kernel: SMP: Total of 2 processors activated.
Dec 16 12:44:21.356161 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:44:21.356166 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:44:21.356172 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 16 12:44:21.356177 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:44:21.356182 kernel: CPU features: detected: Common not Private translations
Dec 16 12:44:21.356188 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:44:21.356194 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Dec 16 12:44:21.356199 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:44:21.356204 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:44:21.356209 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:44:21.356214 kernel: CPU features: detected: Speculation barrier (SB)
Dec 16 12:44:21.356219 kernel: CPU features: detected: TLB range maintenance instructions
Dec 16 12:44:21.356226 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:44:21.356231 kernel: CPU features: detected: Scalable Vector Extension
Dec 16 12:44:21.356236 kernel: alternatives: applying system-wide alternatives
Dec 16 12:44:21.356241 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Dec 16 12:44:21.356246 kernel: SVE: maximum available vector length 16 bytes per vector
Dec 16 12:44:21.356251 kernel: SVE: default vector length 16 bytes per vector
Dec 16 12:44:21.356257 kernel: Memory: 3979964K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 193008K reserved, 16384K cma-reserved)
Dec 16 12:44:21.356263 kernel: devtmpfs: initialized
Dec 16 12:44:21.356268 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:44:21.356273 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:44:21.356279 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:44:21.356284 kernel: 0 pages in range for non-PLT usage
Dec 16 12:44:21.356289 kernel: 515184 pages in range for PLT usage
Dec 16 12:44:21.356294 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:44:21.356299 kernel: SMBIOS 3.1.0 present.
Dec 16 12:44:21.356305 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Dec 16 12:44:21.356311 kernel: DMI: Memory slots populated: 2/2
Dec 16 12:44:21.356316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:44:21.356321 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:44:21.356326 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:44:21.356332 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:44:21.356337 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:44:21.356344 kernel: audit: type=2000 audit(0.061:1): state=initialized audit_enabled=0 res=1
Dec 16 12:44:21.356349 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:44:21.356354 kernel: cpuidle: using governor menu
Dec 16 12:44:21.356359 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:44:21.356364 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:44:21.356369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:44:21.356375 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:44:21.356381 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:44:21.356386 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:44:21.356391 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:44:21.356396 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:44:21.356401 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:44:21.356407 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:44:21.356412 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:44:21.356418 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:44:21.356423 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:44:21.356428 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:44:21.356433 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:44:21.356438 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:44:21.356443 kernel: ACPI: Interpreter enabled
Dec 16 12:44:21.356448 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:44:21.356454 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:44:21.356460 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:44:21.356465 kernel: printk: legacy bootconsole [pl11] disabled
Dec 16 12:44:21.356470 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 16 12:44:21.356475 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:44:21.356480 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:44:21.356485 kernel: iommu: Default domain type: Translated
Dec 16 12:44:21.356491 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:44:21.356497 kernel: efivars: Registered efivars operations
Dec 16 12:44:21.356502 kernel: vgaarb: loaded
Dec 16 12:44:21.356507 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:44:21.356512 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:44:21.356517 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:44:21.356522 kernel: pnp: PnP ACPI init
Dec 16 12:44:21.356527 kernel: pnp: PnP ACPI: found 0 devices
Dec 16 12:44:21.356533 kernel: NET: Registered PF_INET protocol family
Dec 16 12:44:21.356538 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:44:21.356544 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:44:21.356549 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:44:21.356554 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:44:21.356559 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:44:21.356564 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:44:21.356570 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:44:21.356576 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:44:21.356581 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:44:21.356586 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:44:21.356591 kernel: kvm [1]: HYP mode not available
Dec 16 12:44:21.356596 kernel: Initialise system trusted keyrings
Dec 16 12:44:21.356601 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:44:21.356607 kernel: Key type asymmetric registered
Dec 16 12:44:21.356612 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:44:21.356618 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:44:21.356623 kernel: io scheduler mq-deadline registered
Dec 16 12:44:21.356628 kernel: io scheduler kyber registered
Dec 16 12:44:21.356633 kernel: io scheduler bfq registered
Dec 16 12:44:21.356638 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:44:21.356645 kernel: thunder_xcv, ver 1.0
Dec 16 12:44:21.356650 kernel: thunder_bgx, ver 1.0
Dec 16 12:44:21.356655 kernel: nicpf, ver 1.0
Dec 16 12:44:21.356660 kernel: nicvf, ver 1.0
Dec 16 12:44:21.356801 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:44:21.358914 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:44:18 UTC (1765889058)
Dec 16 12:44:21.358946 kernel: efifb: probing for efifb
Dec 16 12:44:21.358953 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 12:44:21.358959 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 12:44:21.358964 kernel: efifb: scrolling: redraw
Dec 16 12:44:21.358970 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 12:44:21.358975 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:44:21.358980 kernel: fb0: EFI VGA frame buffer device
Dec 16 12:44:21.358987 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 16 12:44:21.358992 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:44:21.358998 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:44:21.359003 kernel: watchdog: NMI not fully supported
Dec 16 12:44:21.359008 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:44:21.359014 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:44:21.359019 kernel: Segment Routing with IPv6
Dec 16 12:44:21.359028 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:44:21.359033 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:44:21.359039 kernel: Key type dns_resolver registered
Dec 16 12:44:21.359044 kernel: registered taskstats version 1
Dec 16 12:44:21.359050 kernel: Loading compiled-in X.509 certificates
Dec 16 12:44:21.359055 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9'
Dec 16 12:44:21.359060 kernel: Demotion targets for Node 0: null
Dec 16 12:44:21.359066 kernel: Key type .fscrypt registered
Dec 16 12:44:21.359072 kernel: Key type fscrypt-provisioning registered
Dec 16 12:44:21.359077 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:44:21.359083 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:44:21.359088 kernel: ima: No architecture policies found
Dec 16 12:44:21.359093 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:44:21.359098 kernel: clk: Disabling unused clocks
Dec 16 12:44:21.359104 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:44:21.359110 kernel: Freeing unused kernel memory: 12416K
Dec 16 12:44:21.359115 kernel: Run /init as init process
Dec 16 12:44:21.359121 kernel: with arguments:
Dec 16 12:44:21.359126 kernel: /init
Dec 16 12:44:21.359131 kernel: with environment:
Dec 16 12:44:21.359136 kernel: HOME=/
Dec 16 12:44:21.359142 kernel: TERM=linux
Dec 16 12:44:21.359148 kernel: hv_vmbus: Vmbus version:5.3
Dec 16 12:44:21.359153 kernel: hv_vmbus: registering driver hid_hyperv
Dec 16 12:44:21.359159 kernel: SCSI subsystem initialized
Dec 16 12:44:21.359164 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 16 12:44:21.359304 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 16 12:44:21.359313 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 16 12:44:21.359320 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 16 12:44:21.359326 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 12:44:21.359331 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 12:44:21.359337 kernel: PTP clock support registered
Dec 16 12:44:21.359342 kernel: hv_utils: Registering HyperV Utility Driver
Dec 16 12:44:21.359347 kernel: hv_vmbus: registering driver hv_utils
Dec 16 12:44:21.359353 kernel: hv_utils: Heartbeat IC version 3.0
Dec 16 12:44:21.359359 kernel: hv_utils: Shutdown IC version 3.2
Dec 16 12:44:21.359364 kernel: hv_utils: TimeSync IC version 4.0
Dec 16 12:44:21.359370 kernel: hv_vmbus: registering driver hv_storvsc
Dec 16 12:44:21.359474 kernel: scsi host0: storvsc_host_t
Dec 16 12:44:21.359556 kernel: scsi host1: storvsc_host_t
Dec 16 12:44:21.359646 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 16 12:44:21.359730 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 16 12:44:21.359813 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 16 12:44:21.359901 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 16 12:44:21.359978 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 12:44:21.360052 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 16 12:44:21.360126 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 16 12:44:21.360211 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:44:21.360279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:44:21.360286 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 12:44:21.360358 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 16 12:44:21.360431 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 16 12:44:21.360440 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 12:44:21.360511 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 16 12:44:21.360518 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:44:21.360523 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:44:21.360529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:44:21.360534 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:44:21.360539 kernel: raid6: neonx8 gen() 18561 MB/s
Dec 16 12:44:21.360545 kernel: raid6: neonx4 gen() 18574 MB/s
Dec 16 12:44:21.360551 kernel: raid6: neonx2 gen() 17072 MB/s
Dec 16 12:44:21.360556 kernel: raid6: neonx1 gen() 15133 MB/s
Dec 16 12:44:21.360561 kernel: raid6: int64x8 gen() 10549 MB/s
Dec 16 12:44:21.360566 kernel: raid6: int64x4 gen() 10617 MB/s
Dec 16 12:44:21.360571 kernel: raid6: int64x2 gen() 8985 MB/s
Dec 16 12:44:21.360577 kernel: raid6: int64x1 gen() 7051 MB/s
Dec 16 12:44:21.360582 kernel: raid6: using algorithm neonx4 gen() 18574 MB/s
Dec 16 12:44:21.360588 kernel: raid6: .... xor() 15142 MB/s, rmw enabled
Dec 16 12:44:21.360594 kernel: raid6: using neon recovery algorithm
Dec 16 12:44:21.360599 kernel: xor: measuring software checksum speed
Dec 16 12:44:21.360605 kernel: 8regs : 28607 MB/sec
Dec 16 12:44:21.360610 kernel: 32regs : 28812 MB/sec
Dec 16 12:44:21.360615 kernel: arm64_neon : 37405 MB/sec
Dec 16 12:44:21.360621 kernel: xor: using function: arm64_neon (37405 MB/sec)
Dec 16 12:44:21.360627 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:44:21.360632 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (368)
Dec 16 12:44:21.360638 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc
Dec 16 12:44:21.360643 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:44:21.360648 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:44:21.360654 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:44:21.360659 kernel: loop: module loaded
Dec 16 12:44:21.360666 kernel: loop0: detected capacity change from 0 to 91480
Dec 16 12:44:21.360671 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:44:21.360677 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:44:21.360685 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:44:21.360691 systemd[1]: Detected virtualization microsoft.
Dec 16 12:44:21.360697 systemd[1]: Detected architecture arm64.
Dec 16 12:44:21.360703 systemd[1]: Running in initrd.
Dec 16 12:44:21.360709 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:44:21.360714 systemd[1]: Hostname set to .
Dec 16 12:44:21.360720 systemd[1]: Initializing machine ID from random generator.
Dec 16 12:44:21.360726 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:44:21.360731 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:44:21.360738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:44:21.360744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:44:21.360751 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:44:21.360757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:44:21.360763 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:44:21.360769 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:44:21.360776 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:44:21.360782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:44:21.360788 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:44:21.360793 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:44:21.360799 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:44:21.360805 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:44:21.360811 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:44:21.360817 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:44:21.360823 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:44:21.360828 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:44:21.360834 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:44:21.360840 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:44:21.360846 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:44:21.360857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:44:21.360864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:44:21.360879 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:44:21.360886 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:44:21.360892 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:44:21.360900 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:44:21.360906 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:44:21.360912 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:44:21.360918 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:44:21.360924 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:44:21.360929 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:44:21.360956 systemd-journald[505]: Collecting audit messages is enabled.
Dec 16 12:44:21.360972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:44:21.360979 systemd-journald[505]: Journal started
Dec 16 12:44:21.360994 systemd-journald[505]: Runtime Journal (/run/log/journal/cdf0a665b2254640a19800d0ee3b88f5) is 8M, max 78.3M, 70.3M free.
Dec 16 12:44:21.386079 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:44:21.381202 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:44:21.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:21.386479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:44:21.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:21.419811 kernel: audit: type=1130 audit(1765889061.379:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:21.419838 kernel: audit: type=1130 audit(1765889061.385:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:21.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:21.426078 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:44:21.461118 kernel: audit: type=1130 audit(1765889061.424:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:21.461143 kernel: audit: type=1130 audit(1765889061.441:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:21.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:21.444186 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:44:21.481947 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:44:21.482303 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:44:21.535388 systemd-modules-load[508]: Inserted module 'br_netfilter'
Dec 16 12:44:21.540139 kernel: Bridge firewalling registered
Dec 16 12:44:21.540786 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:44:21.564984 kernel: audit: type=1130 audit(1765889061.549:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.547736 systemd-tmpfiles[519]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:44:21.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.553053 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:44:21.593517 kernel: audit: type=1130 audit(1765889061.568:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.575916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:21.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.615972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:44:21.639902 kernel: audit: type=1130 audit(1765889061.597:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:21.639926 kernel: audit: type=1130 audit(1765889061.620:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.639049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:44:21.653015 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:44:21.672370 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:44:21.685178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:44:21.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.708118 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:44:21.724991 kernel: audit: type=1130 audit(1765889061.694:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.706000 audit: BPF prog-id=6 op=LOAD Dec 16 12:44:21.723218 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:44:21.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.738801 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 16 12:44:21.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:21.753962 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:44:21.848391 systemd-resolved[534]: Positive Trust Anchors: Dec 16 12:44:21.848403 systemd-resolved[534]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:44:21.869745 dracut-cmdline[546]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 16 12:44:21.848406 systemd-resolved[534]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:44:21.848425 systemd-resolved[534]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:44:21.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:21.899630 systemd-resolved[534]: Defaulting to hostname 'linux'. Dec 16 12:44:21.904159 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:44:21.920057 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:44:22.016900 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:44:22.057905 kernel: iscsi: registered transport (tcp) Dec 16 12:44:22.089146 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:44:22.089212 kernel: QLogic iSCSI HBA Driver Dec 16 12:44:22.148121 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:44:22.173419 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:44:22.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.181405 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:44:22.234039 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:44:22.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.242037 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:44:22.254805 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:44:22.295907 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:44:22.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:22.304000 audit: BPF prog-id=7 op=LOAD Dec 16 12:44:22.304000 audit: BPF prog-id=8 op=LOAD Dec 16 12:44:22.307188 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:44:22.382115 systemd-udevd[786]: Using default interface naming scheme 'v257'. Dec 16 12:44:22.386937 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:44:22.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.393126 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:44:22.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.410397 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:44:22.427000 audit: BPF prog-id=9 op=LOAD Dec 16 12:44:22.429008 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:44:22.447924 dracut-pre-trigger[888]: rd.md=0: removing MD RAID activation Dec 16 12:44:22.476289 systemd-networkd[890]: lo: Link UP Dec 16 12:44:22.476295 systemd-networkd[890]: lo: Gained carrier Dec 16 12:44:22.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.479552 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:44:22.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:22.486912 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:44:22.493705 systemd[1]: Reached target network.target - Network. Dec 16 12:44:22.504076 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:44:22.558029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:44:22.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.572530 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:44:22.637916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:44:22.663088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:44:22.694341 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 16 12:44:22.694365 kernel: audit: type=1131 audit(1765889062.673:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.694375 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 12:44:22.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.663198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:22.674371 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:22.702121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:22.727815 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 12:44:22.727912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:22.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.767714 kernel: audit: type=1130 audit(1765889062.736:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.767741 kernel: audit: type=1131 audit(1765889062.736:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.757844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:22.794894 kernel: hv_netvsc 002248b8-77b8-0022-48b8-77b8002248b8 eth0: VF slot 1 added Dec 16 12:44:22.798238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:22.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:22.823858 kernel: audit: type=1130 audit(1765889062.803:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:22.822595 systemd-networkd[890]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:22.822598 systemd-networkd[890]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:44:22.842579 kernel: hv_vmbus: registering driver hv_pci Dec 16 12:44:22.834047 systemd-networkd[890]: eth0: Link UP Dec 16 12:44:22.834124 systemd-networkd[890]: eth0: Gained carrier Dec 16 12:44:22.853825 kernel: hv_pci 1cd5920a-313b-41a7-84c1-9ebb2251f384: PCI VMBus probing: Using version 0x10004 Dec 16 12:44:22.834138 systemd-networkd[890]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:22.872273 kernel: hv_pci 1cd5920a-313b-41a7-84c1-9ebb2251f384: PCI host bridge to bus 313b:00 Dec 16 12:44:22.872493 kernel: pci_bus 313b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 12:44:22.872947 systemd-networkd[890]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:44:22.883436 kernel: pci_bus 313b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 12:44:22.890131 kernel: pci 313b:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 12:44:22.895911 kernel: pci 313b:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 12:44:22.899911 kernel: pci 313b:00:02.0: enabling Extended Tags Dec 16 12:44:22.917976 kernel: pci 313b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 313b:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 12:44:22.929229 kernel: pci_bus 313b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 12:44:22.929478 kernel: pci 313b:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 12:44:23.061181 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Dec 16 12:44:23.073662 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:44:23.139694 kernel: mlx5_core 313b:00:02.0: enabling device (0000 -> 0002) Dec 16 12:44:23.149191 kernel: mlx5_core 313b:00:02.0: PTM is not supported by PCIe Dec 16 12:44:23.149431 kernel: mlx5_core 313b:00:02.0: firmware version: 16.30.5006 Dec 16 12:44:23.164226 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 12:44:23.205746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:44:23.236558 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 16 12:44:23.352889 kernel: hv_netvsc 002248b8-77b8-0022-48b8-77b8002248b8 eth0: VF registering: eth1 Dec 16 12:44:23.353134 kernel: mlx5_core 313b:00:02.0 eth1: joined to eth0 Dec 16 12:44:23.359907 kernel: mlx5_core 313b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 12:44:23.371108 kernel: mlx5_core 313b:00:02.0 enP12603s1: renamed from eth1 Dec 16 12:44:23.370587 systemd-networkd[890]: eth1: Interface name change detected, renamed to enP12603s1. Dec 16 12:44:23.398813 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:44:23.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.404513 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:44:23.438246 kernel: audit: type=1130 audit(1765889063.403:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.432643 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Dec 16 12:44:23.438351 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:44:23.449188 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:44:23.473371 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:44:23.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.495885 kernel: audit: type=1130 audit(1765889063.477:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:23.529904 kernel: mlx5_core 313b:00:02.0 enP12603s1: Link up Dec 16 12:44:23.571160 systemd-networkd[890]: enP12603s1: Link UP Dec 16 12:44:23.574543 kernel: hv_netvsc 002248b8-77b8-0022-48b8-77b8002248b8 eth0: Data path switched to VF: enP12603s1 Dec 16 12:44:23.827092 systemd-networkd[890]: enP12603s1: Gained carrier Dec 16 12:44:24.250705 disk-uuid[996]: Warning: The kernel is still using the old partition table. Dec 16 12:44:24.250705 disk-uuid[996]: The new table will be used at the next reboot or after you Dec 16 12:44:24.250705 disk-uuid[996]: run partprobe(8) or kpartx(8) Dec 16 12:44:24.250705 disk-uuid[996]: The operation has completed successfully. Dec 16 12:44:24.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:24.260457 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:44:24.301859 kernel: audit: type=1130 audit(1765889064.268:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:24.301922 kernel: audit: type=1131 audit(1765889064.268:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:24.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:24.260565 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:44:24.284690 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:44:24.354910 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1115) Dec 16 12:44:24.354970 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:24.365630 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:24.389582 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:24.389598 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:24.399938 kernel: BTRFS info (device sda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:24.400279 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:44:24.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:24.406955 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:44:24.430166 kernel: audit: type=1130 audit(1765889064.404:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:24.843097 systemd-networkd[890]: eth0: Gained IPv6LL Dec 16 12:44:25.370780 ignition[1134]: Ignition 2.22.0 Dec 16 12:44:25.373611 ignition[1134]: Stage: fetch-offline Dec 16 12:44:25.373750 ignition[1134]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:25.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:25.375884 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:44:25.406655 kernel: audit: type=1130 audit(1765889065.382:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:25.373759 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:25.400968 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 12:44:25.373842 ignition[1134]: parsed url from cmdline: "" Dec 16 12:44:25.373844 ignition[1134]: no config URL provided Dec 16 12:44:25.373847 ignition[1134]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:44:25.373854 ignition[1134]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:44:25.373857 ignition[1134]: failed to fetch config: resource requires networking Dec 16 12:44:25.374101 ignition[1134]: Ignition finished successfully Dec 16 12:44:25.439994 ignition[1142]: Ignition 2.22.0 Dec 16 12:44:25.440000 ignition[1142]: Stage: fetch Dec 16 12:44:25.440198 ignition[1142]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:25.440204 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:25.440280 ignition[1142]: parsed url from cmdline: "" Dec 16 12:44:25.440282 ignition[1142]: no config URL provided Dec 16 12:44:25.440286 ignition[1142]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:44:25.440290 ignition[1142]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:44:25.440305 ignition[1142]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 12:44:25.541303 ignition[1142]: GET result: OK Dec 16 12:44:25.541369 ignition[1142]: config has been read from IMDS userdata Dec 16 12:44:25.541383 ignition[1142]: parsing config with SHA512: 136e7b3593792f53f10a05420ed0d143b5b8dab2d4a0cb62011f323d191d8111ffd906c8cfd54ec7e10252a7f2316a0aea3bb569ce14628a48113844a73f1958 Dec 16 12:44:25.547984 unknown[1142]: fetched base config from "system" Dec 16 12:44:25.547995 unknown[1142]: fetched base config from "system" Dec 16 12:44:25.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:25.548215 ignition[1142]: fetch: fetch complete Dec 16 12:44:25.547999 unknown[1142]: fetched user config from "azure" Dec 16 12:44:25.548218 ignition[1142]: fetch: fetch passed Dec 16 12:44:25.550217 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:44:25.548262 ignition[1142]: Ignition finished successfully Dec 16 12:44:25.557352 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:44:25.594905 ignition[1149]: Ignition 2.22.0 Dec 16 12:44:25.594919 ignition[1149]: Stage: kargs Dec 16 12:44:25.601235 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:44:25.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:25.595121 ignition[1149]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:25.606688 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:44:25.595129 ignition[1149]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:25.595761 ignition[1149]: kargs: kargs passed Dec 16 12:44:25.595820 ignition[1149]: Ignition finished successfully Dec 16 12:44:25.639458 ignition[1155]: Ignition 2.22.0 Dec 16 12:44:25.639474 ignition[1155]: Stage: disks Dec 16 12:44:25.645140 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:44:25.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:25.639672 ignition[1155]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:25.651376 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 16 12:44:25.639679 ignition[1155]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:25.660234 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:44:25.640409 ignition[1155]: disks: disks passed Dec 16 12:44:25.669854 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:44:25.640476 ignition[1155]: Ignition finished successfully Dec 16 12:44:25.679043 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:44:25.688043 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:44:25.698572 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:44:25.823596 systemd-fsck[1164]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 12:44:25.832597 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:44:25.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:25.840114 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:44:26.092892 kernel: EXT4-fs (sda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 16 12:44:26.093540 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:44:26.097857 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:44:26.135653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:44:26.144100 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:44:26.159089 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
Dec 16 12:44:26.172427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:44:26.172495 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:44:26.192904 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:44:26.207066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:44:26.229912 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1178) Dec 16 12:44:26.242989 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:26.243051 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:26.253940 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:26.254013 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:26.256228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:44:26.916511 coreos-metadata[1180]: Dec 16 12:44:26.916 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:44:26.927045 coreos-metadata[1180]: Dec 16 12:44:26.927 INFO Fetch successful Dec 16 12:44:26.932482 coreos-metadata[1180]: Dec 16 12:44:26.932 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:44:26.943023 coreos-metadata[1180]: Dec 16 12:44:26.942 INFO Fetch successful Dec 16 12:44:26.956772 coreos-metadata[1180]: Dec 16 12:44:26.956 INFO wrote hostname ci-4515.1.0-a-6d618b7fe6 to /sysroot/etc/hostname Dec 16 12:44:26.965721 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:44:26.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:44:27.126899 initrd-setup-root[1209]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:44:27.158797 initrd-setup-root[1216]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:44:27.166966 initrd-setup-root[1223]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:44:27.175713 initrd-setup-root[1230]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:44:28.039405 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:44:28.054475 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 16 12:44:28.054533 kernel: audit: type=1130 audit(1765889068.048:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.051444 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:44:28.081628 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:44:28.115289 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:44:28.126957 kernel: BTRFS info (device sda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:28.140653 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:44:28.169309 kernel: audit: type=1130 audit(1765889068.147:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:28.171124 ignition[1299]: INFO : Ignition 2.22.0 Dec 16 12:44:28.171124 ignition[1299]: INFO : Stage: mount Dec 16 12:44:28.180983 ignition[1299]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:28.180983 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:28.180983 ignition[1299]: INFO : mount: mount passed Dec 16 12:44:28.180983 ignition[1299]: INFO : Ignition finished successfully Dec 16 12:44:28.221300 kernel: audit: type=1130 audit(1765889068.183:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:28.179786 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:44:28.208409 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:44:28.225117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:44:28.258890 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1310) Dec 16 12:44:28.273705 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:28.273762 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:28.286262 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:28.286329 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:28.288021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:44:28.319917 ignition[1327]: INFO : Ignition 2.22.0 Dec 16 12:44:28.319917 ignition[1327]: INFO : Stage: files Dec 16 12:44:28.319917 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:28.319917 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:28.339902 ignition[1327]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:44:28.339902 ignition[1327]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:44:28.339902 ignition[1327]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:44:28.409464 ignition[1327]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:44:28.415703 ignition[1327]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:44:28.415703 ignition[1327]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:44:28.409918 unknown[1327]: wrote ssh authorized keys file for user: core Dec 16 12:44:28.446476 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:44:28.456482 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:44:28.484767 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:44:28.594261 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:44:28.678837 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:44:28.678837 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:44:28.678837 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:44:28.678837 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:44:28.678837 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:44:28.678837 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 16 12:44:29.153357 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:44:29.378885 ignition[1327]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:44:29.378885 ignition[1327]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:44:29.415424 ignition[1327]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:44:29.429191 ignition[1327]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:44:29.429191 ignition[1327]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:44:29.429191 ignition[1327]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:44:29.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:29.471165 ignition[1327]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:44:29.471165 ignition[1327]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:44:29.471165 ignition[1327]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:44:29.471165 ignition[1327]: INFO : files: files passed Dec 16 12:44:29.471165 ignition[1327]: INFO : Ignition finished successfully Dec 16 12:44:29.510776 kernel: audit: type=1130 audit(1765889069.450:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.439710 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:44:29.452176 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:44:29.509730 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:44:29.522012 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:44:29.537660 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:44:29.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.572949 kernel: audit: type=1130 audit(1765889069.546:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:29.572986 kernel: audit: type=1131 audit(1765889069.546:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.579752 initrd-setup-root-after-ignition[1358]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:29.579752 initrd-setup-root-after-ignition[1358]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:29.593707 initrd-setup-root-after-ignition[1362]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:29.624199 kernel: audit: type=1130 audit(1765889069.598:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.590967 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:44:29.599772 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:44:29.625895 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:44:29.675746 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:44:29.679999 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:44:29.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:29.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.686162 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:44:29.723230 kernel: audit: type=1130 audit(1765889069.685:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.723252 kernel: audit: type=1131 audit(1765889069.685:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.721981 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:44:29.727810 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:44:29.733055 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:44:29.764740 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:44:29.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.788211 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:44:29.799699 kernel: audit: type=1130 audit(1765889069.769:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.807807 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Dec 16 12:44:29.807980 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:44:29.818110 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:44:29.828038 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:44:29.836010 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:44:29.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.836142 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:44:29.848634 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:44:29.853243 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:44:29.862035 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:44:29.872407 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:44:29.881697 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:44:29.892871 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:44:29.904582 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:44:29.914682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:44:29.925165 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:44:29.933359 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:44:29.942338 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:44:29.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:29.949453 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:44:29.949570 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:44:29.960813 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:44:29.969127 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:44:29.978567 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:44:30.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.978630 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:44:30.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.988751 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:44:30.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:29.988861 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:44:30.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.036972 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:44:30.037078 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:44:30.042732 systemd[1]: ignition-files.service: Deactivated successfully. 
Dec 16 12:44:30.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.042809 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:44:30.052772 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 12:44:30.052856 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:44:30.065014 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:44:30.081382 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:44:30.081578 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:44:30.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.152417 ignition[1382]: INFO : Ignition 2.22.0 Dec 16 12:44:30.152417 ignition[1382]: INFO : Stage: umount Dec 16 12:44:30.152417 ignition[1382]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:30.152417 ignition[1382]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:30.152417 ignition[1382]: INFO : umount: umount passed Dec 16 12:44:30.152417 ignition[1382]: INFO : Ignition finished successfully Dec 16 12:44:30.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:30.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.112118 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:44:30.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.133495 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:44:30.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.133677 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:44:30.151153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:44:30.151270 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:44:30.162408 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:44:30.162562 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:44:30.177811 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:44:30.177932 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Dec 16 12:44:30.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.188765 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:44:30.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.189024 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:44:30.196526 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:44:30.196592 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:44:30.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.205648 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:44:30.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.205698 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:44:30.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.210930 systemd[1]: Stopped target network.target - Network. 
Dec 16 12:44:30.376000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:44:30.376000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:44:30.215634 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:44:30.215698 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:44:30.226468 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:44:30.237167 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:44:30.243838 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:44:30.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.252134 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:44:30.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.262825 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:44:30.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.274367 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:44:30.274420 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:44:30.279258 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:44:30.279286 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:44:30.288797 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:44:30.288815 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. 
Dec 16 12:44:30.299846 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:44:30.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.299907 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:44:30.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.309129 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:44:30.309167 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:44:30.545570 kernel: hv_netvsc 002248b8-77b8-0022-48b8-77b8002248b8 eth0: Data path switched from VF: enP12603s1 Dec 16 12:44:30.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.319255 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:44:30.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.324633 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:44:30.334501 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:44:30.335096 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:44:30.335202 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:44:30.350473 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Dec 16 12:44:30.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.350556 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:44:30.364452 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:44:30.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.364560 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:44:30.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.378183 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:44:30.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.384521 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:44:30.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.384570 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Dec 16 12:44:30.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.396201 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:44:30.411977 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:44:30.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:30.412071 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:44:30.423266 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:44:30.423321 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:44:30.435467 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:44:30.435521 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:44:30.450115 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:44:30.486692 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:44:30.486865 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:44:30.498504 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:44:30.498544 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:44:30.504018 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:44:30.504049 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:44:30.509516 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:44:30.509568 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Dec 16 12:44:30.531532 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:44:30.531616 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:44:30.545669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:44:30.545765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:44:30.563039 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:44:30.577214 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:44:30.577313 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:44:30.587619 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:44:30.587675 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:44:30.603346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:44:30.603409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:30.792589 systemd-journald[505]: Received SIGTERM from PID 1 (systemd). Dec 16 12:44:30.612503 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:44:30.612598 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:44:30.621926 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:44:30.622000 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:44:30.631989 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:44:30.632064 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:44:30.641445 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:44:30.650604 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 16 12:44:30.650712 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:44:30.660583 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:44:30.691757 systemd[1]: Switching root. Dec 16 12:44:30.838376 systemd-journald[505]: Journal stopped Dec 16 12:44:35.567468 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:44:35.567490 kernel: SELinux: policy capability open_perms=1 Dec 16 12:44:35.567498 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:44:35.567504 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:44:35.567511 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:44:35.567517 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:44:35.567523 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:44:35.567529 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:44:35.567535 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:44:35.567544 systemd[1]: Successfully loaded SELinux policy in 150.383ms. Dec 16 12:44:35.567552 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.636ms. Dec 16 12:44:35.567559 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:44:35.567566 systemd[1]: Detected virtualization microsoft. Dec 16 12:44:35.567572 systemd[1]: Detected architecture arm64. Dec 16 12:44:35.567580 systemd[1]: Detected first boot. Dec 16 12:44:35.567586 systemd[1]: Hostname set to . Dec 16 12:44:35.567593 systemd[1]: Initializing machine ID from random generator. Dec 16 12:44:35.567599 zram_generator::config[1425]: No configuration found. 
Dec 16 12:44:35.567606 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 12:44:35.567613 systemd[1]: Populated /etc with preset unit settings.
Dec 16 12:44:35.567619 kernel: kauditd_printk_skb: 44 callbacks suppressed
Dec 16 12:44:35.567625 kernel: audit: type=1334 audit(1765889074.570:95): prog-id=12 op=LOAD
Dec 16 12:44:35.567631 kernel: audit: type=1334 audit(1765889074.570:96): prog-id=3 op=UNLOAD
Dec 16 12:44:35.567637 kernel: audit: type=1334 audit(1765889074.573:97): prog-id=13 op=LOAD
Dec 16 12:44:35.567643 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 12:44:35.567650 kernel: audit: type=1334 audit(1765889074.574:98): prog-id=14 op=LOAD
Dec 16 12:44:35.567656 kernel: audit: type=1334 audit(1765889074.574:99): prog-id=4 op=UNLOAD
Dec 16 12:44:35.567662 kernel: audit: type=1334 audit(1765889074.574:100): prog-id=5 op=UNLOAD
Dec 16 12:44:35.567669 kernel: audit: type=1131 audit(1765889074.579:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.567675 kernel: audit: type=1334 audit(1765889074.618:102): prog-id=12 op=UNLOAD
Dec 16 12:44:35.567682 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 12:44:35.567689 kernel: audit: type=1130 audit(1765889074.631:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.567695 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:44:35.567702 kernel: audit: type=1131 audit(1765889074.631:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.567709 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 12:44:35.567715 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 12:44:35.567722 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 12:44:35.567729 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 12:44:35.567736 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 12:44:35.567743 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 12:44:35.567751 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 12:44:35.567758 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 12:44:35.567764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:44:35.567772 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:44:35.567779 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 12:44:35.567785 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 12:44:35.567792 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 12:44:35.567799 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:44:35.567805 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 16 12:44:35.567812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:44:35.567819 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:44:35.567827 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 12:44:35.567833 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 12:44:35.567840 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:44:35.567846 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 12:44:35.567853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:44:35.567861 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:44:35.567867 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Dec 16 12:44:35.567889 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:44:35.567895 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:44:35.567902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 12:44:35.567908 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 12:44:35.567916 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 12:44:35.567923 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:44:35.567930 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Dec 16 12:44:35.567936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:44:35.567944 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Dec 16 12:44:35.567951 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Dec 16 12:44:35.567957 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:44:35.567964 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:44:35.567970 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 12:44:35.567977 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 12:44:35.567984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 12:44:35.567992 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 12:44:35.567998 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 12:44:35.568005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 12:44:35.568011 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 12:44:35.568018 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 12:44:35.568025 systemd[1]: Reached target machines.target - Containers.
Dec 16 12:44:35.568032 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 12:44:35.568039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:44:35.568046 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:44:35.568053 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:44:35.568060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:44:35.568066 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:44:35.568073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:44:35.568081 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:44:35.568087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:44:35.568094 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 12:44:35.568101 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 12:44:35.568107 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 12:44:35.568114 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 12:44:35.568120 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 12:44:35.568129 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:44:35.568136 kernel: ACPI: bus type drm_connector registered
Dec 16 12:44:35.568142 kernel: fuse: init (API version 7.41)
Dec 16 12:44:35.568148 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:44:35.568155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:44:35.568161 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:44:35.568168 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:44:35.568175 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:44:35.568182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:44:35.568189 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:44:35.568196 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:44:35.568202 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:44:35.568224 systemd-journald[1523]: Collecting audit messages is enabled.
Dec 16 12:44:35.568240 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:44:35.568248 systemd-journald[1523]: Journal started
Dec 16 12:44:35.568264 systemd-journald[1523]: Runtime Journal (/run/log/journal/8a9ad820388942c180b82e770d9b40a5) is 8M, max 78.3M, 70.3M free.
Dec 16 12:44:34.986000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 16 12:44:35.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.385000 audit: BPF prog-id=14 op=UNLOAD
Dec 16 12:44:35.385000 audit: BPF prog-id=13 op=UNLOAD
Dec 16 12:44:35.389000 audit: BPF prog-id=15 op=LOAD
Dec 16 12:44:35.389000 audit: BPF prog-id=16 op=LOAD
Dec 16 12:44:35.389000 audit: BPF prog-id=17 op=LOAD
Dec 16 12:44:35.560000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 16 12:44:35.560000 audit[1523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc5452530 a2=4000 a3=0 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:44:35.560000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 16 12:44:34.550402 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:44:34.575580 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 12:44:34.579781 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:44:34.580148 systemd[1]: systemd-journald.service: Consumed 2.808s CPU time.
Dec 16 12:44:35.572926 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:44:35.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.581223 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:44:35.586019 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:44:35.590663 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:44:35.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.596243 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:44:35.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.601792 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:44:35.601947 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:44:35.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.607653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:44:35.607784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:44:35.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.613341 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:44:35.613482 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:44:35.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.618362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:44:35.618512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:44:35.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.624406 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:44:35.624552 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:44:35.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.629684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:44:35.629820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:44:35.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.635046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:44:35.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.640593 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:44:35.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.647683 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:44:35.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.653596 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:44:35.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.660621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:44:35.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.675349 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:44:35.681198 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Dec 16 12:44:35.688554 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:44:35.703015 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:44:35.707942 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:44:35.707982 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:44:35.713265 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:44:35.719744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:44:35.719861 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:44:35.723037 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:44:35.729188 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:44:35.734854 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:44:35.735904 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:44:35.741505 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:44:35.742456 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:44:35.749116 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:44:35.757160 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:44:35.763392 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:44:35.770319 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:44:35.777507 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:44:35.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.785675 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:44:35.789435 systemd-journald[1523]: Time spent on flushing to /var/log/journal/8a9ad820388942c180b82e770d9b40a5 is 20.660ms for 1085 entries.
Dec 16 12:44:35.789435 systemd-journald[1523]: System Journal (/var/log/journal/8a9ad820388942c180b82e770d9b40a5) is 8M, max 2.2G, 2.2G free.
Dec 16 12:44:35.895032 systemd-journald[1523]: Received client request to flush runtime journal.
Dec 16 12:44:35.896823 kernel: loop1: detected capacity change from 0 to 109872
Dec 16 12:44:35.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.800083 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:44:35.839972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:44:35.898652 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:44:35.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.913978 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:44:35.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.967537 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:44:35.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:35.973000 audit: BPF prog-id=18 op=LOAD
Dec 16 12:44:35.973000 audit: BPF prog-id=19 op=LOAD
Dec 16 12:44:35.973000 audit: BPF prog-id=20 op=LOAD
Dec 16 12:44:35.975251 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Dec 16 12:44:35.980000 audit: BPF prog-id=21 op=LOAD
Dec 16 12:44:35.982687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:44:35.989163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:44:36.025000 audit: BPF prog-id=22 op=LOAD
Dec 16 12:44:36.026000 audit: BPF prog-id=23 op=LOAD
Dec 16 12:44:36.026000 audit: BPF prog-id=24 op=LOAD
Dec 16 12:44:36.030045 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Dec 16 12:44:36.035000 audit: BPF prog-id=25 op=LOAD
Dec 16 12:44:36.035000 audit: BPF prog-id=26 op=LOAD
Dec 16 12:44:36.035000 audit: BPF prog-id=27 op=LOAD
Dec 16 12:44:36.039050 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:44:36.074502 systemd-nsresourced[1584]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 16 12:44:36.076042 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 16 12:44:36.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:36.086609 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:44:36.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:36.092932 systemd-tmpfiles[1582]: ACLs are not supported, ignoring.
Dec 16 12:44:36.092943 systemd-tmpfiles[1582]: ACLs are not supported, ignoring.
Dec 16 12:44:36.098994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:44:36.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:36.148162 systemd-oomd[1580]: No swap; memory pressure usage will be degraded
Dec 16 12:44:36.148962 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 16 12:44:36.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:36.179738 systemd-resolved[1581]: Positive Trust Anchors:
Dec 16 12:44:36.179761 systemd-resolved[1581]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:44:36.179764 systemd-resolved[1581]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 12:44:36.179784 systemd-resolved[1581]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:44:36.204903 kernel: loop2: detected capacity change from 0 to 100192
Dec 16 12:44:36.260225 systemd-resolved[1581]: Using system hostname 'ci-4515.1.0-a-6d618b7fe6'.
Dec 16 12:44:36.261411 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:44:36.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:36.266626 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:44:36.579896 kernel: loop3: detected capacity change from 0 to 27736
Dec 16 12:44:36.647564 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:44:36.907897 kernel: loop4: detected capacity change from 0 to 211168
Dec 16 12:44:36.937896 kernel: loop5: detected capacity change from 0 to 109872
Dec 16 12:44:36.951900 kernel: loop6: detected capacity change from 0 to 100192
Dec 16 12:44:36.965929 kernel: loop7: detected capacity change from 0 to 27736
Dec 16 12:44:36.988908 kernel: loop1: detected capacity change from 0 to 211168
Dec 16 12:44:37.003606 (sd-merge)[1605]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'.
Dec 16 12:44:37.006453 (sd-merge)[1605]: Merged extensions into '/usr'.
Dec 16 12:44:37.010074 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:44:37.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:37.015000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 12:44:37.015000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 12:44:37.015000 audit: BPF prog-id=28 op=LOAD
Dec 16 12:44:37.015000 audit: BPF prog-id=29 op=LOAD
Dec 16 12:44:37.017527 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:44:37.023179 systemd[1]: Reload requested from client PID 1565 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:44:37.023192 systemd[1]: Reloading...
Dec 16 12:44:37.052485 systemd-udevd[1607]: Using default interface naming scheme 'v257'.
Dec 16 12:44:37.094900 zram_generator::config[1654]: No configuration found.
Dec 16 12:44:37.277152 systemd[1]: Reloading finished in 253 ms.
Dec 16 12:44:37.297414 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:44:37.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:37.309083 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:44:37.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:37.336533 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:44:37.342000 audit: BPF prog-id=30 op=LOAD
Dec 16 12:44:37.344687 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:44:37.353607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:44:37.362000 audit: BPF prog-id=31 op=LOAD
Dec 16 12:44:37.362000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 12:44:37.362000 audit: BPF prog-id=32 op=LOAD
Dec 16 12:44:37.366000 audit: BPF prog-id=33 op=LOAD
Dec 16 12:44:37.366000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 12:44:37.366000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 12:44:37.367000 audit: BPF prog-id=34 op=LOAD
Dec 16 12:44:37.367000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 12:44:37.368000 audit: BPF prog-id=35 op=LOAD
Dec 16 12:44:37.368000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 12:44:37.368000 audit: BPF prog-id=36 op=LOAD
Dec 16 12:44:37.368000 audit: BPF prog-id=37 op=LOAD
Dec 16 12:44:37.368000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 12:44:37.368000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 12:44:37.369000 audit: BPF prog-id=38 op=LOAD
Dec 16 12:44:37.369000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 12:44:37.369000 audit: BPF prog-id=39 op=LOAD
Dec 16 12:44:37.369000 audit: BPF prog-id=40 op=LOAD
Dec 16 12:44:37.369000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 12:44:37.369000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=41 op=LOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=42 op=LOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=43 op=LOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=44 op=LOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=45 op=LOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 12:44:37.370000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 12:44:37.376470 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 16 12:44:37.381900 systemd[1]: Reload requested from client PID 1712 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:44:37.381914 systemd[1]: Reloading...
Dec 16 12:44:37.413043 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:44:37.413412 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:44:37.413651 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:44:37.423152 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#169 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 12:44:37.423571 systemd-tmpfiles[1716]: ACLs are not supported, ignoring.
Dec 16 12:44:37.424487 systemd-tmpfiles[1716]: ACLs are not supported, ignoring.
Dec 16 12:44:37.450120 systemd-tmpfiles[1716]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:44:37.450266 systemd-tmpfiles[1716]: Skipping /boot
Dec 16 12:44:37.451899 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 12:44:37.459340 systemd-tmpfiles[1716]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:44:37.459457 systemd-tmpfiles[1716]: Skipping /boot
Dec 16 12:44:37.501276 zram_generator::config[1757]: No configuration found.
Dec 16 12:44:37.512790 systemd-networkd[1715]: lo: Link UP
Dec 16 12:44:37.512804 systemd-networkd[1715]: lo: Gained carrier
Dec 16 12:44:37.514318 systemd-networkd[1715]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:44:37.514326 systemd-networkd[1715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:44:37.582986 kernel: mlx5_core 313b:00:02.0 enP12603s1: Link up
Dec 16 12:44:37.617900 kernel: hv_netvsc 002248b8-77b8-0022-48b8-77b8002248b8 eth0: Data path switched to VF: enP12603s1
Dec 16 12:44:37.616701 systemd-networkd[1715]: enP12603s1: Link UP
Dec 16 12:44:37.616829 systemd-networkd[1715]: eth0: Link UP
Dec 16 12:44:37.616832 systemd-networkd[1715]: eth0: Gained carrier
Dec 16 12:44:37.616848 systemd-networkd[1715]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:44:37.639176 systemd-networkd[1715]: enP12603s1: Gained carrier
Dec 16 12:44:37.644939 systemd-networkd[1715]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 16 12:44:37.739684 systemd[1]: Reloading finished in 357 ms.
Dec 16 12:44:37.748709 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:44:37.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:37.759000 audit: BPF prog-id=46 op=LOAD
Dec 16 12:44:37.759000 audit: BPF prog-id=47 op=LOAD
Dec 16 12:44:37.759000 audit: BPF prog-id=48 op=LOAD
Dec 16 12:44:37.759000 audit: BPF prog-id=49 op=LOAD
Dec 16 12:44:37.760000 audit: BPF prog-id=50 op=LOAD
Dec 16 12:44:37.760000 audit: BPF prog-id=51 op=LOAD
Dec 16 12:44:37.760000 audit: BPF prog-id=52 op=LOAD
Dec 16 12:44:37.761000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 12:44:37.761000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 12:44:37.761000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 12:44:37.761000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 12:44:37.761000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 12:44:37.761000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 12:44:37.761000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=53 op=LOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=54 op=LOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=44 op=UNLOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=45 op=UNLOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=55 op=LOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=56 op=LOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=57 op=LOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 12:44:37.763000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 12:44:37.764000 audit: BPF prog-id=58 op=LOAD
Dec 16 12:44:37.764000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 12:44:37.764000 audit: BPF prog-id=59 op=LOAD
Dec 16 12:44:37.764000 audit: BPF prog-id=41 op=UNLOAD
Dec 16 12:44:37.764000 audit: BPF prog-id=60 op=LOAD
Dec 16 12:44:37.764000 audit: BPF prog-id=61 op=LOAD
Dec 16 12:44:37.764000 audit: BPF prog-id=42 op=UNLOAD
Dec 16 12:44:37.764000 audit: BPF prog-id=43 op=UNLOAD
Dec 16 12:44:37.771097 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:44:37.779916 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 12:44:37.779995 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 12:44:37.780006 kernel: MACsec IEEE 802.1AE
Dec 16 12:44:37.792898 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 12:44:37.792995 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 12:44:37.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:37.801027 kernel: hv_balloon: Memory hot add disabled on ARM64
Dec 16 12:44:37.813077 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 12:44:37.818634 kernel: Console: switching to colour dummy device 80x25
Dec 16 12:44:37.826890 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:44:37.852929 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:44:37.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:37.869310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 16 12:44:37.875279 systemd[1]: Reached target network.target - Network.
Dec 16 12:44:37.881356 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:44:37.896720 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:44:37.902175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:44:37.905058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:44:37.920722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:44:37.928897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:44:37.942637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:44:37.948199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:44:37.949027 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 12:44:37.951088 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:44:37.966184 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:44:37.971606 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:44:37.973389 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:44:37.982620 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:44:37.991103 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:44:37.998076 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:44:38.006111 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:44:38.015728 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:44:38.022641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:44:38.023900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:44:38.023000 audit[1921]: SYSTEM_BOOT pid=1921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.034248 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:44:38.034932 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:44:38.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.040233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:44:38.040423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:44:38.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.046171 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:44:38.046342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:44:38.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.052053 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:44:38.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.066168 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:44:38.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.072382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:44:38.072445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:44:38.077929 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:44:38.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.131139 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 12:44:38.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:44:38.240000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 12:44:38.240000 audit[1943]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc5271aa0 a2=420 a3=0 items=0 ppid=1899 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:44:38.240000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 12:44:38.241577 augenrules[1943]: No rules
Dec 16 12:44:38.242802 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:44:38.243128 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:44:38.409387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:44:38.626262 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:44:38.632632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:44:39.435009 systemd-networkd[1715]: eth0: Gained IPv6LL
Dec 16 12:44:39.437290 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 12:44:39.443631 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 12:44:42.874022 ldconfig[1911]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:44:42.882336 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:44:42.888852 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:44:42.918918 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:44:42.924156 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:44:42.928614 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:44:42.933685 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:44:42.939602 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:44:42.944450 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:44:42.949484 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 16 12:44:42.954734 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 16 12:44:42.959595 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:44:42.965327 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:44:42.965358 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:44:42.969609 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:44:42.987978 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:44:42.994121 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:44:43.000151 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:44:43.006535 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:44:43.013369 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:44:43.021465 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:44:43.026476 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:44:43.032825 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:44:43.037915 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:44:43.042178 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:44:43.047042 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:44:43.047063 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:44:43.049325 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 12:44:43.060008 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:44:43.075571 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 12:44:43.083976 chronyd[1959]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 12:44:43.084429 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:44:43.092019 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 12:44:43.101323 chronyd[1959]: Timezone right/UTC failed leap second check, ignoring
Dec 16 12:44:43.101497 chronyd[1959]: Loaded seccomp filter (level 2)
Dec 16 12:44:43.108077 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 12:44:43.113293 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 12:44:43.117748 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 12:44:43.119726 jq[1967]: false
Dec 16 12:44:43.120191 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 12:44:43.126280 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 12:44:43.134697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:44:43.140901 KVP[1969]: KVP starting; pid is:1969
Dec 16 12:44:43.142553 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 12:44:43.148686 KVP[1969]: KVP LIC Version: 3.1
Dec 16 12:44:43.148912 kernel: hv_utils: KVP IC version 4.0
Dec 16 12:44:43.149817 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 12:44:43.161089 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 12:44:43.168863 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 12:44:43.176098 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 12:44:43.186254 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 12:44:43.192813 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 12:44:43.193322 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 12:44:43.194109 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 12:44:43.198156 extend-filesystems[1968]: Found /dev/sda6
Dec 16 12:44:43.218045 extend-filesystems[1968]: Found /dev/sda9
Dec 16 12:44:43.204007 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 12:44:43.224159 extend-filesystems[1968]: Checking size of /dev/sda9
Dec 16 12:44:43.222248 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 12:44:43.233572 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 12:44:43.236971 jq[1990]: true
Dec 16 12:44:43.243541 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 12:44:43.243759 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 12:44:43.247213 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 12:44:43.247430 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 12:44:43.252953 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 12:44:43.257076 extend-filesystems[1968]: Resized partition /dev/sda9
Dec 16 12:44:43.268634 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 12:44:43.268866 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 12:44:43.297888 extend-filesystems[2015]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 12:44:43.321619 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks
Dec 16 12:44:43.321652 kernel: EXT4-fs (sda9): resized filesystem to 6376955
Dec 16 12:44:43.321665 update_engine[1986]: I20251216 12:44:43.320580 1986 main.cc:92] Flatcar Update Engine starting
Dec 16 12:44:43.330275 systemd-logind[1983]: New seat seat0.
Dec 16 12:44:43.342010 jq[2017]: true
Dec 16 12:44:43.344181 systemd-logind[1983]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 16 12:44:43.344509 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 12:44:43.356974 extend-filesystems[2015]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 16 12:44:43.356974 extend-filesystems[2015]: old_desc_blocks = 4, new_desc_blocks = 4
Dec 16 12:44:43.356974 extend-filesystems[2015]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long.
Dec 16 12:44:43.427848 tar[2012]: linux-arm64/LICENSE
Dec 16 12:44:43.427848 tar[2012]: linux-arm64/helm
Dec 16 12:44:43.364231 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 12:44:43.428458 extend-filesystems[1968]: Resized filesystem in /dev/sda9
Dec 16 12:44:43.364951 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 12:44:43.479505 dbus-daemon[1962]: [system] SELinux support is enabled
Dec 16 12:44:43.489269 update_engine[1986]: I20251216 12:44:43.483939 1986 update_check_scheduler.cc:74] Next update check in 7m19s
Dec 16 12:44:43.479839 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 12:44:43.493242 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 12:44:43.493423 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 12:44:43.494344 dbus-daemon[1962]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 12:44:43.505521 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 12:44:43.505553 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 12:44:43.517682 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 12:44:43.523825 bash[2066]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:44:43.527981 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 12:44:43.559418 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 12:44:43.574771 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 12:44:43.614990 coreos-metadata[1961]: Dec 16 12:44:43.614 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 12:44:43.619989 coreos-metadata[1961]: Dec 16 12:44:43.619 INFO Fetch successful
Dec 16 12:44:43.619989 coreos-metadata[1961]: Dec 16 12:44:43.619 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 16 12:44:43.627166 coreos-metadata[1961]: Dec 16 12:44:43.626 INFO Fetch successful
Dec 16 12:44:43.627378 coreos-metadata[1961]: Dec 16 12:44:43.627 INFO Fetching http://168.63.129.16/machine/8a9b4819-546a-4c6f-b904-8ee7a3ee69aa/0520a28a%2D5a97%2D4872%2D90ff%2D214eb4885745.%5Fci%2D4515.1.0%2Da%2D6d618b7fe6?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 16 12:44:43.632995 coreos-metadata[1961]: Dec 16 12:44:43.632 INFO Fetch successful
Dec 16 12:44:43.633297 coreos-metadata[1961]: Dec 16 12:44:43.633 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 16 12:44:43.646986 coreos-metadata[1961]: Dec 16 12:44:43.645 INFO Fetch successful
Dec 16 12:44:43.703993 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 12:44:43.710411 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 12:44:43.758803 containerd[2018]: time="2025-12-16T12:44:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 12:44:43.761528 containerd[2018]: time="2025-12-16T12:44:43.761103884Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 16 12:44:43.779288 containerd[2018]: time="2025-12-16T12:44:43.779233964Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.184µs"
Dec 16 12:44:43.781135 containerd[2018]: time="2025-12-16T12:44:43.781097444Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 12:44:43.781375 containerd[2018]: time="2025-12-16T12:44:43.781358036Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 12:44:43.781713 containerd[2018]: time="2025-12-16T12:44:43.781697124Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.781963420Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.781986124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782037924Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782045036Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782259924Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782269804Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782276876Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782289788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782421580Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782431012Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782516036Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783622 containerd[2018]: time="2025-12-16T12:44:43.782670180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783828 containerd[2018]: time="2025-12-16T12:44:43.782689828Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:44:43.783828 containerd[2018]: time="2025-12-16T12:44:43.782695988Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 12:44:43.783828 containerd[2018]: time="2025-12-16T12:44:43.782725556Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 12:44:43.786613 containerd[2018]: time="2025-12-16T12:44:43.786584684Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 12:44:43.787248 containerd[2018]: time="2025-12-16T12:44:43.787228596Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 12:44:43.799462 containerd[2018]: time="2025-12-16T12:44:43.799411580Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 12:44:43.799799 containerd[2018]: time="2025-12-16T12:44:43.799779540Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 12:44:43.800423 containerd[2018]: time="2025-12-16T12:44:43.800400596Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 12:44:43.800506 containerd[2018]: time="2025-12-16T12:44:43.800491916Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 12:44:43.800653 containerd[2018]: time="2025-12-16T12:44:43.800546204Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 12:44:43.800717 containerd[2018]: time="2025-12-16T12:44:43.800705828Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 12:44:43.800795 containerd[2018]: time="2025-12-16T12:44:43.800784092Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 12:44:43.800837 containerd[2018]: time="2025-12-16T12:44:43.800827244Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 12:44:43.800895 containerd[2018]: time="2025-12-16T12:44:43.800884684Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 12:44:43.801011 containerd[2018]: time="2025-12-16T12:44:43.800998988Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 12:44:43.801096 containerd[2018]: time="2025-12-16T12:44:43.801084620Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 12:44:43.801268 containerd[2018]: time="2025-12-16T12:44:43.801254788Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 12:44:43.801322 containerd[2018]: time="2025-12-16T12:44:43.801304820Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 12:44:43.801369 containerd[2018]: time="2025-12-16T12:44:43.801359772Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 12:44:43.801613 containerd[2018]: time="2025-12-16T12:44:43.801593844Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 12:44:43.802013 containerd[2018]: time="2025-12-16T12:44:43.801995236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 12:44:43.802218 containerd[2018]: time="2025-12-16T12:44:43.802152460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 12:44:43.802310 containerd[2018]: time="2025-12-16T12:44:43.802297492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 12:44:43.802427 containerd[2018]: time="2025-12-16T12:44:43.802413300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 12:44:43.802566 containerd[2018]: time="2025-12-16T12:44:43.802552980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 12:44:43.802688 containerd[2018]: time="2025-12-16T12:44:43.802670612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 12:44:43.802753 containerd[2018]: time="2025-12-16T12:44:43.802743644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 12:44:43.802993 containerd[2018]: time="2025-12-16T12:44:43.802924900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 12:44:43.803090 containerd[2018]: time="2025-12-16T12:44:43.803077748Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 12:44:43.803242 containerd[2018]: time="2025-12-16T12:44:43.803228516Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 12:44:43.803423 containerd[2018]: time="2025-12-16T12:44:43.803373364Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 12:44:43.803749 containerd[2018]: time="2025-12-16T12:44:43.803683396Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 12:44:43.803915 containerd[2018]: time="2025-12-16T12:44:43.803849140Z" level=info msg="Start snapshots syncer"
Dec 16 12:44:43.804094 containerd[2018]: time="2025-12-16T12:44:43.804001404Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 12:44:43.805135 containerd[2018]: time="2025-12-16T12:44:43.805018804Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 12:44:43.805405 containerd[2018]: time="2025-12-16T12:44:43.805348332Z" level=info msg="loading plugin"
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:44:43.805897 containerd[2018]: time="2025-12-16T12:44:43.805572732Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:44:43.806380 containerd[2018]: time="2025-12-16T12:44:43.806328780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:44:43.806644 containerd[2018]: time="2025-12-16T12:44:43.806533932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:44:43.806707 containerd[2018]: time="2025-12-16T12:44:43.806694684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:44:43.806845 containerd[2018]: time="2025-12-16T12:44:43.806775076Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:44:43.806961 containerd[2018]: time="2025-12-16T12:44:43.806912220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:44:43.807114 containerd[2018]: time="2025-12-16T12:44:43.807071652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:44:43.807330 containerd[2018]: time="2025-12-16T12:44:43.807236172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:44:43.807330 containerd[2018]: time="2025-12-16T12:44:43.807253116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:44:43.807330 containerd[2018]: time="2025-12-16T12:44:43.807261460Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:44:43.807561 containerd[2018]: time="2025-12-16T12:44:43.807495220Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:44:43.807715 containerd[2018]: time="2025-12-16T12:44:43.807640004Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:44:43.807825 containerd[2018]: time="2025-12-16T12:44:43.807809068Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:44:43.808104 containerd[2018]: time="2025-12-16T12:44:43.808040660Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:44:43.808104 containerd[2018]: time="2025-12-16T12:44:43.808053364Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:44:43.808104 containerd[2018]: time="2025-12-16T12:44:43.808065620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:44:43.808104 containerd[2018]: time="2025-12-16T12:44:43.808074292Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:44:43.808104 containerd[2018]: time="2025-12-16T12:44:43.808083964Z" level=info msg="runtime interface created" Dec 16 12:44:43.808104 containerd[2018]: time="2025-12-16T12:44:43.808087852Z" level=info msg="created NRI interface" Dec 16 12:44:43.808435 containerd[2018]: time="2025-12-16T12:44:43.808093148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:44:43.808435 containerd[2018]: time="2025-12-16T12:44:43.808232828Z" level=info msg="Connect containerd service" Dec 16 12:44:43.808435 containerd[2018]: time="2025-12-16T12:44:43.808260756Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:44:43.810822 
containerd[2018]: time="2025-12-16T12:44:43.810566292Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:44:43.948951 tar[2012]: linux-arm64/README.md Dec 16 12:44:43.970423 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:44:43.983459 locksmithd[2108]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:44:44.202920 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:44:44.222926 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:44:44.231352 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:44:44.239084 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 12:44:44.256274 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:44:44.258053 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:44:44.266126 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
Dec 16 12:44:44.271701 containerd[2018]: time="2025-12-16T12:44:44.271622652Z" level=info msg="Start subscribing containerd event" Dec 16 12:44:44.271701 containerd[2018]: time="2025-12-16T12:44:44.271702708Z" level=info msg="Start recovering state" Dec 16 12:44:44.271840 containerd[2018]: time="2025-12-16T12:44:44.271797404Z" level=info msg="Start event monitor" Dec 16 12:44:44.271840 containerd[2018]: time="2025-12-16T12:44:44.271810084Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:44:44.271840 containerd[2018]: time="2025-12-16T12:44:44.271815332Z" level=info msg="Start streaming server" Dec 16 12:44:44.271840 containerd[2018]: time="2025-12-16T12:44:44.271826500Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:44:44.271840 containerd[2018]: time="2025-12-16T12:44:44.271832588Z" level=info msg="runtime interface starting up..." Dec 16 12:44:44.271840 containerd[2018]: time="2025-12-16T12:44:44.271836484Z" level=info msg="starting plugins..." Dec 16 12:44:44.271926 containerd[2018]: time="2025-12-16T12:44:44.271846508Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:44:44.272156 containerd[2018]: time="2025-12-16T12:44:44.272125364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:44:44.272283 containerd[2018]: time="2025-12-16T12:44:44.272269572Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:44:44.272397 containerd[2018]: time="2025-12-16T12:44:44.272385220Z" level=info msg="containerd successfully booted in 0.514583s" Dec 16 12:44:44.273783 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:44:44.283154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:44:44.289545 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:44:44.292648 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:44:44.318023 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:44:44.328321 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:44:44.334999 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:44:44.344625 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:44:44.349432 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:44:44.356492 systemd[1]: Startup finished in 2.723s (kernel) + 11.855s (initrd) + 12.261s (userspace) = 26.840s. Dec 16 12:44:44.681900 kubelet[2170]: E1216 12:44:44.681759 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:44:44.685571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:44:44.685698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:44:44.686150 systemd[1]: kubelet.service: Consumed 568ms CPU time, 257.4M memory peak. Dec 16 12:44:45.060478 login[2176]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:44:45.061754 login[2177]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:44:45.072472 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:44:45.076144 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 16 12:44:45.078630 systemd-logind[1983]: New session 1 of user core. Dec 16 12:44:45.082326 systemd-logind[1983]: New session 2 of user core. Dec 16 12:44:45.095053 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:44:45.097368 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:44:45.111162 (systemd)[2189]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:44:45.113812 systemd-logind[1983]: New session c1 of user core. Dec 16 12:44:45.267940 systemd[2189]: Queued start job for default target default.target. Dec 16 12:44:45.277964 systemd[2189]: Created slice app.slice - User Application Slice. Dec 16 12:44:45.278001 systemd[2189]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:44:45.278012 systemd[2189]: Reached target paths.target - Paths. Dec 16 12:44:45.278069 systemd[2189]: Reached target timers.target - Timers. Dec 16 12:44:45.279394 systemd[2189]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:44:45.280001 systemd[2189]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:44:45.291922 systemd[2189]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:44:45.292070 systemd[2189]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:44:45.292166 systemd[2189]: Reached target sockets.target - Sockets. Dec 16 12:44:45.292211 systemd[2189]: Reached target basic.target - Basic System. Dec 16 12:44:45.292234 systemd[2189]: Reached target default.target - Main User Target. Dec 16 12:44:45.292255 systemd[2189]: Startup finished in 172ms. Dec 16 12:44:45.292414 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:44:45.296059 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:44:45.296719 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 16 12:44:45.857379 waagent[2169]: 2025-12-16T12:44:45.857291Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:44:45.862903 waagent[2169]: 2025-12-16T12:44:45.862756Z INFO Daemon Daemon OS: flatcar 4515.1.0 Dec 16 12:44:45.866827 waagent[2169]: 2025-12-16T12:44:45.866765Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:44:45.870735 waagent[2169]: 2025-12-16T12:44:45.870678Z INFO Daemon Daemon Run daemon Dec 16 12:44:45.874465 waagent[2169]: 2025-12-16T12:44:45.874413Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Dec 16 12:44:45.881917 waagent[2169]: 2025-12-16T12:44:45.881854Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:44:45.886595 waagent[2169]: 2025-12-16T12:44:45.886550Z INFO Daemon Daemon Activate resource disk Dec 16 12:44:45.890624 waagent[2169]: 2025-12-16T12:44:45.890577Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:44:45.899849 waagent[2169]: 2025-12-16T12:44:45.899796Z INFO Daemon Daemon Found device: None Dec 16 12:44:45.903800 waagent[2169]: 2025-12-16T12:44:45.903751Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:44:45.911302 waagent[2169]: 2025-12-16T12:44:45.911262Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:44:45.921230 waagent[2169]: 2025-12-16T12:44:45.921184Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:44:45.926478 waagent[2169]: 2025-12-16T12:44:45.926435Z INFO Daemon Daemon Running default provisioning handler Dec 16 12:44:45.936807 waagent[2169]: 2025-12-16T12:44:45.936730Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Dec 16 12:44:45.949456 waagent[2169]: 2025-12-16T12:44:45.949403Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:44:45.958707 waagent[2169]: 2025-12-16T12:44:45.958656Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:44:45.963196 waagent[2169]: 2025-12-16T12:44:45.963152Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:44:46.016392 waagent[2169]: 2025-12-16T12:44:46.015892Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:44:46.041312 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 12:44:46.045901 waagent[2169]: 2025-12-16T12:44:46.043842Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:44:46.048339 waagent[2169]: 2025-12-16T12:44:46.048275Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:44:46.053153 waagent[2169]: 2025-12-16T12:44:46.053103Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 12:44:46.058648 waagent[2169]: 2025-12-16T12:44:46.058603Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:44:46.063845 waagent[2169]: 2025-12-16T12:44:46.063798Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:44:46.068293 waagent[2169]: 2025-12-16T12:44:46.068245Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:44:46.101397 waagent[2169]: 2025-12-16T12:44:46.095676Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:44:46.101947 waagent[2169]: 2025-12-16T12:44:46.101921Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:44:46.106781 waagent[2169]: 2025-12-16T12:44:46.106720Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:44:46.240488 waagent[2169]: 2025-12-16T12:44:46.240344Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:44:46.246668 waagent[2169]: 2025-12-16T12:44:46.246592Z INFO Daemon Daemon Forcing an update of the goal state. 
Dec 16 12:44:46.256524 waagent[2169]: 2025-12-16T12:44:46.256471Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:44:46.278097 waagent[2169]: 2025-12-16T12:44:46.278046Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:44:46.283842 waagent[2169]: 2025-12-16T12:44:46.283797Z INFO Daemon Dec 16 12:44:46.286279 waagent[2169]: 2025-12-16T12:44:46.286223Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 98d2be3c-53bd-4ed7-b58b-de1a5fa697a3 eTag: 8954602786571672925 source: Fabric] Dec 16 12:44:46.296410 waagent[2169]: 2025-12-16T12:44:46.296359Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 12:44:46.302850 waagent[2169]: 2025-12-16T12:44:46.302807Z INFO Daemon Dec 16 12:44:46.305581 waagent[2169]: 2025-12-16T12:44:46.305523Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:44:46.317370 waagent[2169]: 2025-12-16T12:44:46.317326Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:44:46.383733 waagent[2169]: 2025-12-16T12:44:46.383644Z INFO Daemon Downloaded certificate {'thumbprint': 'EC5D49A70CC181937B314125535B747E5639284C', 'hasPrivateKey': True} Dec 16 12:44:46.393274 waagent[2169]: 2025-12-16T12:44:46.393221Z INFO Daemon Fetch goal state completed Dec 16 12:44:46.404928 waagent[2169]: 2025-12-16T12:44:46.404890Z INFO Daemon Daemon Starting provisioning Dec 16 12:44:46.410303 waagent[2169]: 2025-12-16T12:44:46.410246Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 16 12:44:46.414799 waagent[2169]: 2025-12-16T12:44:46.414754Z INFO Daemon Daemon Set hostname [ci-4515.1.0-a-6d618b7fe6] Dec 16 12:44:46.435912 waagent[2169]: 2025-12-16T12:44:46.435824Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-a-6d618b7fe6] Dec 16 12:44:46.441317 waagent[2169]: 2025-12-16T12:44:46.441258Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:44:46.446328 waagent[2169]: 2025-12-16T12:44:46.446278Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:44:46.457281 systemd-networkd[1715]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:46.457289 systemd-networkd[1715]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:44:46.457385 systemd-networkd[1715]: eth0: DHCP lease lost Dec 16 12:44:46.475647 waagent[2169]: 2025-12-16T12:44:46.475452Z INFO Daemon Daemon Create user account if not exists Dec 16 12:44:46.480009 waagent[2169]: 2025-12-16T12:44:46.479941Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:44:46.484455 waagent[2169]: 2025-12-16T12:44:46.484403Z INFO Daemon Daemon Configure sudoer Dec 16 12:44:46.485964 systemd-networkd[1715]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:44:46.492528 waagent[2169]: 2025-12-16T12:44:46.492397Z INFO Daemon Daemon Configure sshd Dec 16 12:44:46.498610 waagent[2169]: 2025-12-16T12:44:46.498542Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:44:46.509222 waagent[2169]: 2025-12-16T12:44:46.509173Z INFO Daemon Daemon Deploy ssh public key. 
Dec 16 12:44:47.587725 waagent[2169]: 2025-12-16T12:44:47.587669Z INFO Daemon Daemon Provisioning complete Dec 16 12:44:47.605247 waagent[2169]: 2025-12-16T12:44:47.605199Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:44:47.610552 waagent[2169]: 2025-12-16T12:44:47.610502Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 16 12:44:47.618381 waagent[2169]: 2025-12-16T12:44:47.618334Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:44:47.727928 waagent[2241]: 2025-12-16T12:44:47.727082Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:44:47.727928 waagent[2241]: 2025-12-16T12:44:47.727229Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Dec 16 12:44:47.727928 waagent[2241]: 2025-12-16T12:44:47.727271Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:44:47.727928 waagent[2241]: 2025-12-16T12:44:47.727308Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:44:47.759940 waagent[2241]: 2025-12-16T12:44:47.759843Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:44:47.760117 waagent[2241]: 2025-12-16T12:44:47.760086Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:44:47.760160 waagent[2241]: 2025-12-16T12:44:47.760142Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:44:47.766800 waagent[2241]: 2025-12-16T12:44:47.766729Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:44:47.772813 waagent[2241]: 2025-12-16T12:44:47.772772Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:44:47.773332 waagent[2241]: 2025-12-16T12:44:47.773299Z INFO ExtHandler Dec 16 12:44:47.773390 waagent[2241]: 
2025-12-16T12:44:47.773371Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b1e62638-de7f-4d1e-b2b9-68f8d5abfb69 eTag: 8954602786571672925 source: Fabric] Dec 16 12:44:47.773638 waagent[2241]: 2025-12-16T12:44:47.773611Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 16 12:44:47.774143 waagent[2241]: 2025-12-16T12:44:47.774111Z INFO ExtHandler Dec 16 12:44:47.774185 waagent[2241]: 2025-12-16T12:44:47.774168Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:44:47.778333 waagent[2241]: 2025-12-16T12:44:47.778299Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:44:47.840609 waagent[2241]: 2025-12-16T12:44:47.840459Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EC5D49A70CC181937B314125535B747E5639284C', 'hasPrivateKey': True} Dec 16 12:44:47.841042 waagent[2241]: 2025-12-16T12:44:47.841004Z INFO ExtHandler Fetch goal state completed Dec 16 12:44:47.856969 waagent[2241]: 2025-12-16T12:44:47.856903Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Dec 16 12:44:47.860839 waagent[2241]: 2025-12-16T12:44:47.860773Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2241 Dec 16 12:44:47.861005 waagent[2241]: 2025-12-16T12:44:47.860972Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:44:47.861316 waagent[2241]: 2025-12-16T12:44:47.861286Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:44:47.862506 waagent[2241]: 2025-12-16T12:44:47.862466Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:44:47.862843 waagent[2241]: 2025-12-16T12:44:47.862811Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', 
'4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:44:47.863015 waagent[2241]: 2025-12-16T12:44:47.862987Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:44:47.863456 waagent[2241]: 2025-12-16T12:44:47.863424Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 12:44:47.933035 waagent[2241]: 2025-12-16T12:44:47.932992Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:44:47.933231 waagent[2241]: 2025-12-16T12:44:47.933201Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:44:47.938820 waagent[2241]: 2025-12-16T12:44:47.938246Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:44:47.944054 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit waagent.service)... Dec 16 12:44:47.944069 systemd[1]: Reloading... Dec 16 12:44:48.026937 zram_generator::config[2298]: No configuration found. Dec 16 12:44:48.183084 systemd[1]: Reloading finished in 238 ms. Dec 16 12:44:48.209037 waagent[2241]: 2025-12-16T12:44:48.208274Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:44:48.209037 waagent[2241]: 2025-12-16T12:44:48.208432Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:44:49.129922 waagent[2241]: 2025-12-16T12:44:49.129058Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:44:49.129922 waagent[2241]: 2025-12-16T12:44:49.129404Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:44:49.130283 waagent[2241]: 2025-12-16T12:44:49.130111Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:44:49.130283 waagent[2241]: 2025-12-16T12:44:49.130184Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:44:49.130376 waagent[2241]: 2025-12-16T12:44:49.130340Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:44:49.130468 waagent[2241]: 2025-12-16T12:44:49.130424Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:44:49.130575 waagent[2241]: 2025-12-16T12:44:49.130546Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:44:49.130575 waagent[2241]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:44:49.130575 waagent[2241]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:44:49.130575 waagent[2241]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:44:49.130575 waagent[2241]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:44:49.130575 waagent[2241]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:44:49.130575 waagent[2241]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:44:49.131098 waagent[2241]: 2025-12-16T12:44:49.131065Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Dec 16 12:44:49.131257 waagent[2241]: 2025-12-16T12:44:49.131232Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:44:49.131529 waagent[2241]: 2025-12-16T12:44:49.131500Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:44:49.131620 waagent[2241]: 2025-12-16T12:44:49.131573Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:44:49.131707 waagent[2241]: 2025-12-16T12:44:49.131667Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:44:49.131949 waagent[2241]: 2025-12-16T12:44:49.131911Z INFO EnvHandler ExtHandler Configure routes Dec 16 12:44:49.132263 waagent[2241]: 2025-12-16T12:44:49.132225Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:44:49.132326 waagent[2241]: 2025-12-16T12:44:49.132311Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:44:49.132444 waagent[2241]: 2025-12-16T12:44:49.132400Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 12:44:49.132584 waagent[2241]: 2025-12-16T12:44:49.132553Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:44:49.133082 waagent[2241]: 2025-12-16T12:44:49.132943Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:44:49.140496 waagent[2241]: 2025-12-16T12:44:49.140438Z INFO ExtHandler ExtHandler Dec 16 12:44:49.140719 waagent[2241]: 2025-12-16T12:44:49.140687Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f86c5e8d-fb50-4440-9aab-32e0a5f8fbba correlation 1f9bcaf1-1010-40c2-a9c4-beca24d6f8c6 created: 2025-12-16T12:43:54.021744Z] Dec 16 12:44:49.141186 waagent[2241]: 2025-12-16T12:44:49.141142Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Dec 16 12:44:49.141756 waagent[2241]: 2025-12-16T12:44:49.141708Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 16 12:44:49.170042 waagent[2241]: 2025-12-16T12:44:49.169983Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:44:49.170042 waagent[2241]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 12:44:49.170660 waagent[2241]: 2025-12-16T12:44:49.170622Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 50BF0EAE-64CF-4746-ADFB-A2C6E302CA35;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:44:49.179823 waagent[2241]: 2025-12-16T12:44:49.179758Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:44:49.179823 waagent[2241]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:44:49.179823 waagent[2241]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:44:49.179823 waagent[2241]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:77:b8 brd ff:ff:ff:ff:ff:ff\ altname enx002248b877b8 Dec 16 12:44:49.179823 waagent[2241]: 3: enP12603s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:77:b8 brd ff:ff:ff:ff:ff:ff\ altname enP12603p0s2 Dec 16 12:44:49.179823 waagent[2241]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:44:49.179823 waagent[2241]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:44:49.179823 waagent[2241]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:44:49.179823 waagent[2241]: 
Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:44:49.179823 waagent[2241]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:44:49.179823 waagent[2241]: 2: eth0 inet6 fe80::222:48ff:feb8:77b8/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:44:49.249482 waagent[2241]: 2025-12-16T12:44:49.249398Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:44:49.249482 waagent[2241]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:49.249482 waagent[2241]: pkts bytes target prot opt in out source destination Dec 16 12:44:49.249482 waagent[2241]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:49.249482 waagent[2241]: pkts bytes target prot opt in out source destination Dec 16 12:44:49.249482 waagent[2241]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:49.249482 waagent[2241]: pkts bytes target prot opt in out source destination Dec 16 12:44:49.249482 waagent[2241]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:44:49.249482 waagent[2241]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:44:49.249482 waagent[2241]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:44:49.252254 waagent[2241]: 2025-12-16T12:44:49.252192Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:44:49.252254 waagent[2241]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:49.252254 waagent[2241]: pkts bytes target prot opt in out source destination Dec 16 12:44:49.252254 waagent[2241]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:49.252254 waagent[2241]: pkts bytes target prot opt in out source destination Dec 16 12:44:49.252254 waagent[2241]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:44:49.252254 waagent[2241]: pkts bytes target prot opt in out source destination Dec 16 12:44:49.252254 waagent[2241]: 0 0 ACCEPT tcp -- * * 
0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:44:49.252254 waagent[2241]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:44:49.252254 waagent[2241]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:44:49.252471 waagent[2241]: 2025-12-16T12:44:49.252455Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:44:54.936487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:44:54.937952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:44:55.059606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:44:55.066166 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:44:55.161545 kubelet[2394]: E1216 12:44:55.161483 2394 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:44:55.164640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:44:55.164760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:44:55.165414 systemd[1]: kubelet.service: Consumed 123ms CPU time, 106M memory peak. Dec 16 12:45:05.250447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:45:05.252010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:05.353338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:45:05.361155 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:05.467751 kubelet[2409]: E1216 12:45:05.467673 2409 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:05.470270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:05.470512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:45:05.471124 systemd[1]: kubelet.service: Consumed 117ms CPU time, 107.1M memory peak. Dec 16 12:45:06.903579 chronyd[1959]: Selected source PHC0 Dec 16 12:45:10.415718 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:45:10.417442 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:35342.service - OpenSSH per-connection server daemon (10.200.16.10:35342). Dec 16 12:45:10.945879 sshd[2417]: Accepted publickey for core from 10.200.16.10 port 35342 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:10.947041 sshd-session[2417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:10.951207 systemd-logind[1983]: New session 3 of user core. Dec 16 12:45:10.957063 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:45:11.242414 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:35344.service - OpenSSH per-connection server daemon (10.200.16.10:35344). 
Dec 16 12:45:11.635427 sshd[2423]: Accepted publickey for core from 10.200.16.10 port 35344 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:11.637086 sshd-session[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:11.641173 systemd-logind[1983]: New session 4 of user core. Dec 16 12:45:11.648099 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:45:11.943255 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:35346.service - OpenSSH per-connection server daemon (10.200.16.10:35346). Dec 16 12:45:12.202604 sshd[2426]: Connection closed by 10.200.16.10 port 35344 Dec 16 12:45:12.202419 sshd-session[2423]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:12.206365 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:35344.service: Deactivated successfully. Dec 16 12:45:12.209114 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:45:12.211828 systemd-logind[1983]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:45:12.213380 systemd-logind[1983]: Removed session 4. Dec 16 12:45:12.364751 sshd[2429]: Accepted publickey for core from 10.200.16.10 port 35346 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:12.366349 sshd-session[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:12.370926 systemd-logind[1983]: New session 5 of user core. Dec 16 12:45:12.381070 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:45:12.595769 sshd[2435]: Connection closed by 10.200.16.10 port 35346 Dec 16 12:45:12.595602 sshd-session[2429]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:12.599465 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:35346.service: Deactivated successfully. Dec 16 12:45:12.601325 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:45:12.602154 systemd-logind[1983]: Session 5 logged out. 
Waiting for processes to exit. Dec 16 12:45:12.603689 systemd-logind[1983]: Removed session 5. Dec 16 12:45:12.687060 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:35360.service - OpenSSH per-connection server daemon (10.200.16.10:35360). Dec 16 12:45:13.147735 sshd[2441]: Accepted publickey for core from 10.200.16.10 port 35360 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:13.148846 sshd-session[2441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:13.152589 systemd-logind[1983]: New session 6 of user core. Dec 16 12:45:13.161078 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:45:13.382186 sshd[2444]: Connection closed by 10.200.16.10 port 35360 Dec 16 12:45:13.381461 sshd-session[2441]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:13.385166 systemd-logind[1983]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:45:13.385821 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:35360.service: Deactivated successfully. Dec 16 12:45:13.387782 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:45:13.389391 systemd-logind[1983]: Removed session 6. Dec 16 12:45:13.469040 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:35370.service - OpenSSH per-connection server daemon (10.200.16.10:35370). Dec 16 12:45:13.893696 sshd[2450]: Accepted publickey for core from 10.200.16.10 port 35370 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:13.895648 sshd-session[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:13.899929 systemd-logind[1983]: New session 7 of user core. Dec 16 12:45:13.911318 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 12:45:14.148519 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:45:14.148957 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:14.175360 sudo[2454]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:14.252552 sshd[2453]: Connection closed by 10.200.16.10 port 35370 Dec 16 12:45:14.252951 sshd-session[2450]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:14.256980 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:35370.service: Deactivated successfully. Dec 16 12:45:14.258515 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:45:14.259277 systemd-logind[1983]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:45:14.260340 systemd-logind[1983]: Removed session 7. Dec 16 12:45:14.343972 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:35372.service - OpenSSH per-connection server daemon (10.200.16.10:35372). Dec 16 12:45:14.768860 sshd[2460]: Accepted publickey for core from 10.200.16.10 port 35372 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:14.770124 sshd-session[2460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:14.774363 systemd-logind[1983]: New session 8 of user core. Dec 16 12:45:14.780067 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 12:45:14.926820 sudo[2465]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:45:14.927417 sudo[2465]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:14.942643 sudo[2465]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:14.947816 sudo[2464]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:45:14.948055 sudo[2464]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:14.956167 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:45:14.983000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:45:14.987928 kernel: kauditd_printk_skb: 140 callbacks suppressed Dec 16 12:45:14.987999 kernel: audit: type=1305 audit(1765889114.983:241): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:45:14.988227 augenrules[2487]: No rules Dec 16 12:45:14.983000 audit[2487]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2c90260 a2=420 a3=0 items=0 ppid=2468 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:14.997418 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:45:14.999925 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 12:45:15.012122 kernel: audit: type=1300 audit(1765889114.983:241): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2c90260 a2=420 a3=0 items=0 ppid=2468 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:15.012384 sudo[2464]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:14.983000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:45:15.020175 kernel: audit: type=1327 audit(1765889114.983:241): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:45:15.020239 kernel: audit: type=1130 audit(1765889114.995:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:14.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.044593 kernel: audit: type=1131 audit(1765889114.995:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:15.044717 kernel: audit: type=1106 audit(1765889115.011:244): pid=2464 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.011000 audit[2464]: USER_END pid=2464 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.011000 audit[2464]: CRED_DISP pid=2464 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.074183 kernel: audit: type=1104 audit(1765889115.011:245): pid=2464 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.38:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.185790 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:35376.service - OpenSSH per-connection server daemon (10.200.16.10:35376). Dec 16 12:45:15.200213 kernel: audit: type=1130 audit(1765889115.185:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.38:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:15.378733 sshd[2463]: Connection closed by 10.200.16.10 port 35372 Dec 16 12:45:15.379333 sshd-session[2460]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:15.379000 audit[2460]: USER_END pid=2460 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.382764 systemd-logind[1983]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:45:15.384968 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:35372.service: Deactivated successfully. Dec 16 12:45:15.388607 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:45:15.391352 systemd-logind[1983]: Removed session 8. Dec 16 12:45:15.380000 audit[2460]: CRED_DISP pid=2460 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.412310 kernel: audit: type=1106 audit(1765889115.379:247): pid=2460 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.412376 kernel: audit: type=1104 audit(1765889115.380:248): pid=2460 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@5-10.200.20.38:22-10.200.16.10:35372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.499965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:45:15.503093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:15.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.613445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:15.616815 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:15.645861 kubelet[2507]: E1216 12:45:15.645717 2507 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:15.645000 audit[2493]: USER_ACCT pid=2493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.647387 sshd[2493]: Accepted publickey for core from 10.200.16.10 port 35376 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:15.646000 audit[2493]: CRED_ACQ pid=2493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 
12:45:15.646000 audit[2493]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0ba84c0 a2=3 a3=0 items=0 ppid=1 pid=2493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:15.646000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:45:15.648149 sshd-session[2493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:15.649720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:15.649828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:45:15.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:15.651235 systemd[1]: kubelet.service: Consumed 113ms CPU time, 106.9M memory peak. Dec 16 12:45:15.654409 systemd-logind[1983]: New session 9 of user core. Dec 16 12:45:15.662078 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 12:45:15.663000 audit[2493]: USER_START pid=2493 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.665000 audit[2514]: CRED_ACQ pid=2514 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:15.814000 audit[2515]: USER_ACCT pid=2515 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.815609 sudo[2515]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:45:15.814000 audit[2515]: CRED_REFR pid=2515 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:15.815940 sudo[2515]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:15.816000 audit[2515]: USER_START pid=2515 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:17.016775 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 12:45:17.026199 (dockerd)[2533]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:45:17.819210 dockerd[2533]: time="2025-12-16T12:45:17.819149860Z" level=info msg="Starting up" Dec 16 12:45:17.821523 dockerd[2533]: time="2025-12-16T12:45:17.821488700Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:45:17.830804 dockerd[2533]: time="2025-12-16T12:45:17.830705652Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:45:17.913296 systemd[1]: var-lib-docker-metacopy\x2dcheck3907893245-merged.mount: Deactivated successfully. Dec 16 12:45:17.925948 dockerd[2533]: time="2025-12-16T12:45:17.925902956Z" level=info msg="Loading containers: start." Dec 16 12:45:17.951901 kernel: Initializing XFRM netlink socket Dec 16 12:45:17.988000 audit[2580]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:17.988000 audit[2580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe050f3d0 a2=0 a3=0 items=0 ppid=2533 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:17.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:45:17.989000 audit[2582]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:17.989000 audit[2582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe0aca0c0 a2=0 a3=0 items=0 ppid=2533 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:17.989000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:45:17.991000 audit[2584]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:17.991000 audit[2584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffbc7e940 a2=0 a3=0 items=0 ppid=2533 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:17.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:45:17.993000 audit[2586]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:17.993000 audit[2586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3596950 a2=0 a3=0 items=0 ppid=2533 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:17.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:45:17.995000 audit[2588]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:17.995000 audit[2588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdac9bf60 a2=0 a3=0 items=0 ppid=2533 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:17.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:45:17.996000 audit[2590]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:17.996000 audit[2590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff30c2e40 a2=0 a3=0 items=0 ppid=2533 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:17.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:17.998000 audit[2592]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:17.998000 audit[2592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeec72d80 a2=0 a3=0 items=0 ppid=2533 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:17.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:18.000000 audit[2594]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.000000 audit[2594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc4c7d740 a2=0 a3=0 items=0 ppid=2533 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:45:18.029000 audit[2597]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.029000 audit[2597]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd6156bd0 a2=0 a3=0 items=0 ppid=2533 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:45:18.030000 audit[2599]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.030000 audit[2599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe6587cb0 a2=0 a3=0 items=0 ppid=2533 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.030000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:45:18.032000 audit[2601]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.032000 audit[2601]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=236 a0=3 a1=fffff448ccf0 a2=0 a3=0 items=0 ppid=2533 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.032000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:45:18.034000 audit[2603]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.034000 audit[2603]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd68e80a0 a2=0 a3=0 items=0 ppid=2533 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:18.036000 audit[2605]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.036000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffb18cda0 a2=0 a3=0 items=0 ppid=2533 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:45:18.094000 audit[2635]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.094000 
audit[2635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffeabca220 a2=0 a3=0 items=0 ppid=2533 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:45:18.096000 audit[2637]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.096000 audit[2637]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcdac1e40 a2=0 a3=0 items=0 ppid=2533 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.096000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:45:18.098000 audit[2639]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.098000 audit[2639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2b56070 a2=0 a3=0 items=0 ppid=2533 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:45:18.099000 audit[2641]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.099000 audit[2641]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb48ee20 a2=0 a3=0 items=0 ppid=2533 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:45:18.101000 audit[2643]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2643 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.101000 audit[2643]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1887800 a2=0 a3=0 items=0 ppid=2533 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.101000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:45:18.102000 audit[2645]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.102000 audit[2645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd4e37fb0 a2=0 a3=0 items=0 ppid=2533 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:18.104000 audit[2647]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.104000 
audit[2647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffd315960 a2=0 a3=0 items=0 ppid=2533 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.104000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:18.106000 audit[2649]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.106000 audit[2649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff9ff07d0 a2=0 a3=0 items=0 ppid=2533 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.106000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:45:18.108000 audit[2651]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2651 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.108000 audit[2651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffe9c48490 a2=0 a3=0 items=0 ppid=2533 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 
12:45:18.109000 audit[2653]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.109000 audit[2653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe0831db0 a2=0 a3=0 items=0 ppid=2533 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:45:18.111000 audit[2655]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2655 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.111000 audit[2655]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffedb910e0 a2=0 a3=0 items=0 ppid=2533 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.111000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:45:18.113000 audit[2657]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.113000 audit[2657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff8060e30 a2=0 a3=0 items=0 ppid=2533 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.113000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:18.115000 audit[2659]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.115000 audit[2659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe63f0c10 a2=0 a3=0 items=0 ppid=2533 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.115000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:45:18.120000 audit[2664]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.120000 audit[2664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd425fe80 a2=0 a3=0 items=0 ppid=2533 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.120000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:45:18.122000 audit[2666]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.122000 audit[2666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc75fc1c0 a2=0 a3=0 items=0 ppid=2533 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:45:18.122000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:45:18.124000 audit[2668]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.124000 audit[2668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce035a10 a2=0 a3=0 items=0 ppid=2533 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:45:18.126000 audit[2670]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.126000 audit[2670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc3fc5ec0 a2=0 a3=0 items=0 ppid=2533 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:45:18.127000 audit[2672]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.127000 audit[2672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff7759f90 a2=0 a3=0 items=0 ppid=2533 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.127000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:45:18.129000 audit[2674]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:18.129000 audit[2674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff2a3d510 a2=0 a3=0 items=0 ppid=2533 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.129000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:45:18.184000 audit[2679]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.184000 audit[2679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff6755ca0 a2=0 a3=0 items=0 ppid=2533 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:45:18.185000 audit[2681]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.185000 audit[2681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc02452a0 a2=0 a3=0 items=0 ppid=2533 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:45:18.192000 audit[2689]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.192000 audit[2689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffffbad3d70 a2=0 a3=0 items=0 ppid=2533 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.192000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:45:18.197000 audit[2694]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.197000 audit[2694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff0dd7320 a2=0 a3=0 items=0 ppid=2533 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:45:18.199000 audit[2696]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.199000 audit[2696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffe6f8f340 a2=0 a3=0 items=0 ppid=2533 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:45:18.201000 audit[2698]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.201000 audit[2698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd55b2550 a2=0 a3=0 items=0 ppid=2533 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.201000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:45:18.203000 audit[2700]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.203000 audit[2700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe5709110 a2=0 a3=0 items=0 ppid=2533 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.203000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:18.204000 audit[2702]: NETFILTER_CFG table=filter:44 family=2 entries=1 
op=nft_register_rule pid=2702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:18.204000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff933d990 a2=0 a3=0 items=0 ppid=2533 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:18.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:45:18.206700 systemd-networkd[1715]: docker0: Link UP Dec 16 12:45:18.218277 dockerd[2533]: time="2025-12-16T12:45:18.218227340Z" level=info msg="Loading containers: done." Dec 16 12:45:18.230674 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1559304354-merged.mount: Deactivated successfully. Dec 16 12:45:18.259251 dockerd[2533]: time="2025-12-16T12:45:18.259188516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:45:18.259441 dockerd[2533]: time="2025-12-16T12:45:18.259298108Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:45:18.259441 dockerd[2533]: time="2025-12-16T12:45:18.259410340Z" level=info msg="Initializing buildkit" Dec 16 12:45:18.296711 dockerd[2533]: time="2025-12-16T12:45:18.296663084Z" level=info msg="Completed buildkit initialization" Dec 16 12:45:18.303009 dockerd[2533]: time="2025-12-16T12:45:18.302955388Z" level=info msg="Daemon has completed initialization" Dec 16 12:45:18.303612 dockerd[2533]: time="2025-12-16T12:45:18.303106380Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:45:18.303284 systemd[1]: Started docker.service - Docker 
Application Container Engine. Dec 16 12:45:18.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:19.331702 containerd[2018]: time="2025-12-16T12:45:19.331644308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:45:20.276836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873766276.mount: Deactivated successfully. Dec 16 12:45:21.037791 containerd[2018]: time="2025-12-16T12:45:21.037739052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:21.040684 containerd[2018]: time="2025-12-16T12:45:21.040627852Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Dec 16 12:45:21.043162 containerd[2018]: time="2025-12-16T12:45:21.043112284Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:21.048097 containerd[2018]: time="2025-12-16T12:45:21.048033028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:21.048579 containerd[2018]: time="2025-12-16T12:45:21.048377956Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.716681992s" Dec 16 12:45:21.048579 containerd[2018]: 
time="2025-12-16T12:45:21.048411980Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 16 12:45:21.049942 containerd[2018]: time="2025-12-16T12:45:21.049917412Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:45:22.140767 containerd[2018]: time="2025-12-16T12:45:22.140352396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:22.142811 containerd[2018]: time="2025-12-16T12:45:22.142761772Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Dec 16 12:45:22.145380 containerd[2018]: time="2025-12-16T12:45:22.145334500Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:22.148711 containerd[2018]: time="2025-12-16T12:45:22.148646340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:22.149441 containerd[2018]: time="2025-12-16T12:45:22.149292420Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.099348224s" Dec 16 12:45:22.149441 containerd[2018]: time="2025-12-16T12:45:22.149326812Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image 
reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 16 12:45:22.149864 containerd[2018]: time="2025-12-16T12:45:22.149839292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 12:45:23.157605 containerd[2018]: time="2025-12-16T12:45:23.156953109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:23.160561 containerd[2018]: time="2025-12-16T12:45:23.160510226Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 16 12:45:23.165073 containerd[2018]: time="2025-12-16T12:45:23.165043471Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:23.169012 containerd[2018]: time="2025-12-16T12:45:23.168969125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:23.169505 containerd[2018]: time="2025-12-16T12:45:23.169476637Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.019524992s" Dec 16 12:45:23.169832 containerd[2018]: time="2025-12-16T12:45:23.169805768Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 16 12:45:23.170498 containerd[2018]: time="2025-12-16T12:45:23.170271273Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:45:24.116565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910318843.mount: Deactivated successfully. Dec 16 12:45:24.407308 containerd[2018]: time="2025-12-16T12:45:24.407167715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:24.410203 containerd[2018]: time="2025-12-16T12:45:24.410025058Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28254952" Dec 16 12:45:24.413207 containerd[2018]: time="2025-12-16T12:45:24.413162613Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:24.418491 containerd[2018]: time="2025-12-16T12:45:24.418436262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:24.419540 containerd[2018]: time="2025-12-16T12:45:24.419474096Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.249175987s" Dec 16 12:45:24.419540 containerd[2018]: time="2025-12-16T12:45:24.419504285Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 16 12:45:24.420619 containerd[2018]: time="2025-12-16T12:45:24.420589103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:45:25.513071 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1160939562.mount: Deactivated successfully. Dec 16 12:45:25.749648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 12:45:25.751941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:25.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:25.872483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:25.876666 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 12:45:25.876790 kernel: audit: type=1130 audit(1765889125.871:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:25.896180 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:26.298282 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 12:45:26.298355 kernel: audit: type=1131 audit(1765889125.978:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:25.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed'
Dec 16 12:45:26.298503 kubelet[2839]: E1216 12:45:25.977770 2839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:45:25.979642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:45:25.979742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:45:25.980078 systemd[1]: kubelet.service: Consumed 118ms CPU time, 107.2M memory peak.
Dec 16 12:45:26.777866 containerd[2018]: time="2025-12-16T12:45:26.777203493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:26.782038 containerd[2018]: time="2025-12-16T12:45:26.781968234Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=0"
Dec 16 12:45:26.785013 containerd[2018]: time="2025-12-16T12:45:26.784978805Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:26.789049 containerd[2018]: time="2025-12-16T12:45:26.788979665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:26.789784 containerd[2018]: time="2025-12-16T12:45:26.789553788Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.368930025s"
Dec 16 12:45:26.789784 containerd[2018]: time="2025-12-16T12:45:26.789585429Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Dec 16 12:45:26.790136 containerd[2018]: time="2025-12-16T12:45:26.790062613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 12:45:27.292644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194970532.mount: Deactivated successfully.
Dec 16 12:45:27.307221 containerd[2018]: time="2025-12-16T12:45:27.307160220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:45:27.309804 containerd[2018]: time="2025-12-16T12:45:27.309743881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 16 12:45:27.312150 containerd[2018]: time="2025-12-16T12:45:27.312100302Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:45:27.316127 containerd[2018]: time="2025-12-16T12:45:27.316051656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:45:27.316334 containerd[2018]: time="2025-12-16T12:45:27.316305569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 526.218476ms"
Dec 16 12:45:27.316375 containerd[2018]: time="2025-12-16T12:45:27.316336418Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 16 12:45:27.316892 containerd[2018]: time="2025-12-16T12:45:27.316850571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 12:45:27.868241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189041979.mount: Deactivated successfully.
Dec 16 12:45:28.522998 update_engine[1986]: I20251216 12:45:28.522910 1986 update_attempter.cc:509] Updating boot flags...
Dec 16 12:45:30.091789 containerd[2018]: time="2025-12-16T12:45:30.091055161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:30.094977 containerd[2018]: time="2025-12-16T12:45:30.094909760Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377"
Dec 16 12:45:30.097752 containerd[2018]: time="2025-12-16T12:45:30.097698940Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:30.101821 containerd[2018]: time="2025-12-16T12:45:30.101761314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:30.103899 containerd[2018]: time="2025-12-16T12:45:30.102465753Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.785571709s"
Dec 16 12:45:30.103899 containerd[2018]: time="2025-12-16T12:45:30.102504122Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Dec 16 12:45:33.491113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:45:33.491652 systemd[1]: kubelet.service: Consumed 118ms CPU time, 107.2M memory peak.
Dec 16 12:45:33.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:45:33.500118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:45:33.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:45:33.521482 kernel: audit: type=1130 audit(1765889133.490:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:45:33.521585 kernel: audit: type=1131 audit(1765889133.490:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:45:33.540403 systemd[1]: Reload requested from client PID 3141 ('systemctl') (unit session-9.scope)...
Dec 16 12:45:33.540420 systemd[1]: Reloading...
Dec 16 12:45:33.658961 zram_generator::config[3214]: No configuration found.
Dec 16 12:45:33.803032 systemd[1]: Reloading finished in 262 ms.
Dec 16 12:45:33.835000 audit: BPF prog-id=86 op=LOAD
Dec 16 12:45:33.835000 audit: BPF prog-id=82 op=UNLOAD
Dec 16 12:45:33.847629 kernel: audit: type=1334 audit(1765889133.835:305): prog-id=86 op=LOAD
Dec 16 12:45:33.847671 kernel: audit: type=1334 audit(1765889133.835:306): prog-id=82 op=UNLOAD
Dec 16 12:45:33.835000 audit: BPF prog-id=87 op=LOAD
Dec 16 12:45:33.851740 kernel: audit: type=1334 audit(1765889133.835:307): prog-id=87 op=LOAD
Dec 16 12:45:33.835000 audit: BPF prog-id=88 op=LOAD
Dec 16 12:45:33.856264 kernel: audit: type=1334 audit(1765889133.835:308): prog-id=88 op=LOAD
Dec 16 12:45:33.856301 kernel: audit: type=1334 audit(1765889133.835:309): prog-id=83 op=UNLOAD
Dec 16 12:45:33.835000 audit: BPF prog-id=83 op=UNLOAD
Dec 16 12:45:33.835000 audit: BPF prog-id=84 op=UNLOAD
Dec 16 12:45:33.864850 kernel: audit: type=1334 audit(1765889133.835:310): prog-id=84 op=UNLOAD
Dec 16 12:45:33.836000 audit: BPF prog-id=89 op=LOAD
Dec 16 12:45:33.869501 kernel: audit: type=1334 audit(1765889133.836:311): prog-id=89 op=LOAD
Dec 16 12:45:33.836000 audit: BPF prog-id=66 op=UNLOAD
Dec 16 12:45:33.874217 kernel: audit: type=1334 audit(1765889133.836:312): prog-id=66 op=UNLOAD
Dec 16 12:45:33.841000 audit: BPF prog-id=90 op=LOAD
Dec 16 12:45:33.841000 audit: BPF prog-id=91 op=LOAD
Dec 16 12:45:33.841000 audit: BPF prog-id=67 op=UNLOAD
Dec 16 12:45:33.841000 audit: BPF prog-id=68 op=UNLOAD
Dec 16 12:45:33.841000 audit: BPF prog-id=92 op=LOAD
Dec 16 12:45:33.841000 audit: BPF prog-id=93 op=LOAD
Dec 16 12:45:33.841000 audit: BPF prog-id=72 op=UNLOAD
Dec 16 12:45:33.841000 audit: BPF prog-id=73 op=UNLOAD
Dec 16 12:45:33.842000 audit: BPF prog-id=94 op=LOAD
Dec 16 12:45:33.875000 audit: BPF prog-id=75 op=UNLOAD
Dec 16 12:45:33.875000 audit: BPF prog-id=95 op=LOAD
Dec 16 12:45:33.876000 audit: BPF prog-id=74 op=UNLOAD
Dec 16 12:45:33.876000 audit: BPF prog-id=96 op=LOAD
Dec 16 12:45:33.876000 audit: BPF prog-id=76 op=UNLOAD
Dec 16 12:45:33.876000 audit: BPF prog-id=97 op=LOAD
Dec 16 12:45:33.876000 audit: BPF prog-id=98 op=LOAD
Dec 16 12:45:33.876000 audit: BPF prog-id=77 op=UNLOAD
Dec 16 12:45:33.876000 audit: BPF prog-id=78 op=UNLOAD
Dec 16 12:45:33.877000 audit: BPF prog-id=99 op=LOAD
Dec 16 12:45:33.877000 audit: BPF prog-id=79 op=UNLOAD
Dec 16 12:45:33.877000 audit: BPF prog-id=100 op=LOAD
Dec 16 12:45:33.877000 audit: BPF prog-id=101 op=LOAD
Dec 16 12:45:33.877000 audit: BPF prog-id=80 op=UNLOAD
Dec 16 12:45:33.877000 audit: BPF prog-id=81 op=UNLOAD
Dec 16 12:45:33.878000 audit: BPF prog-id=102 op=LOAD
Dec 16 12:45:33.878000 audit: BPF prog-id=85 op=UNLOAD
Dec 16 12:45:33.878000 audit: BPF prog-id=103 op=LOAD
Dec 16 12:45:33.878000 audit: BPF prog-id=69 op=UNLOAD
Dec 16 12:45:33.879000 audit: BPF prog-id=104 op=LOAD
Dec 16 12:45:33.879000 audit: BPF prog-id=105 op=LOAD
Dec 16 12:45:33.879000 audit: BPF prog-id=70 op=UNLOAD
Dec 16 12:45:33.879000 audit: BPF prog-id=71 op=UNLOAD
Dec 16 12:45:33.891393 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:45:33.891463 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:45:33.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:45:33.891771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:45:33.891903 systemd[1]: kubelet.service: Consumed 84ms CPU time, 95.1M memory peak.
Dec 16 12:45:33.896283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:45:34.067253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:45:34.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:45:34.078391 (kubelet)[3258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:45:34.108917 kubelet[3258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:45:34.108917 kubelet[3258]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:45:34.108917 kubelet[3258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:45:34.108917 kubelet[3258]: I1216 12:45:34.108436 3258 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:45:34.403997 kubelet[3258]: I1216 12:45:34.402130 3258 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 12:45:34.403997 kubelet[3258]: I1216 12:45:34.402167 3258 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:45:34.403997 kubelet[3258]: I1216 12:45:34.402511 3258 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:45:34.428228 kubelet[3258]: E1216 12:45:34.428185 3258 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:45:34.429319 kubelet[3258]: I1216 12:45:34.429295 3258 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:45:34.435120 kubelet[3258]: I1216 12:45:34.435099 3258 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:45:34.438764 kubelet[3258]: I1216 12:45:34.438732 3258 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:45:34.439978 kubelet[3258]: I1216 12:45:34.439927 3258 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:45:34.440204 kubelet[3258]: I1216 12:45:34.440064 3258 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-6d618b7fe6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:45:34.440340 kubelet[3258]: I1216 12:45:34.440328 3258 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:45:34.440386 kubelet[3258]: I1216 12:45:34.440379 3258 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 12:45:34.441226 kubelet[3258]: I1216 12:45:34.441205 3258 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:45:34.444366 kubelet[3258]: I1216 12:45:34.444343 3258 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 12:45:34.444486 kubelet[3258]: I1216 12:45:34.444473 3258 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:45:34.444555 kubelet[3258]: I1216 12:45:34.444547 3258 kubelet.go:386] "Adding apiserver pod source"
Dec 16 12:45:34.445801 kubelet[3258]: I1216 12:45:34.445778 3258 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:45:34.450022 kubelet[3258]: I1216 12:45:34.449993 3258 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Dec 16 12:45:34.450456 kubelet[3258]: I1216 12:45:34.450438 3258 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:45:34.450507 kubelet[3258]: W1216 12:45:34.450498 3258 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:45:34.452746 kubelet[3258]: I1216 12:45:34.452725 3258 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 12:45:34.452815 kubelet[3258]: I1216 12:45:34.452767 3258 server.go:1289] "Started kubelet"
Dec 16 12:45:34.453384 kubelet[3258]: E1216 12:45:34.452956 3258 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-6d618b7fe6&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:45:34.456047 kubelet[3258]: I1216 12:45:34.456015 3258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:45:34.458832 kubelet[3258]: E1216 12:45:34.457672 3258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-6d618b7fe6.1881b2cfcbc71a76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-6d618b7fe6,UID:ci-4515.1.0-a-6d618b7fe6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-6d618b7fe6,},FirstTimestamp:2025-12-16 12:45:34.452742774 +0000 UTC m=+0.370738644,LastTimestamp:2025-12-16 12:45:34.452742774 +0000 UTC m=+0.370738644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-6d618b7fe6,}"
Dec 16 12:45:34.460047 kubelet[3258]: E1216 12:45:34.460019 3258 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:45:34.461207 kubelet[3258]: I1216 12:45:34.461170 3258 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:45:34.461915 kubelet[3258]: I1216 12:45:34.461860 3258 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 12:45:34.462086 kubelet[3258]: E1216 12:45:34.462051 3258 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found"
Dec 16 12:45:34.463190 kubelet[3258]: I1216 12:45:34.463167 3258 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 12:45:34.463518 kubelet[3258]: I1216 12:45:34.463122 3258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:45:34.463761 kubelet[3258]: I1216 12:45:34.463736 3258 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:45:34.463814 kubelet[3258]: I1216 12:45:34.463801 3258 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 12:45:34.464000 audit[3273]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:45:34.464000 audit[3273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdc917b90 a2=0 a3=0 items=0 ppid=3258 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Dec 16 12:45:34.466039 kubelet[3258]: I1216 12:45:34.465641 3258 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:45:34.465000 audit[3274]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:45:34.465000 audit[3274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeaa13ba0 a2=0 a3=0 items=0 ppid=3258 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Dec 16 12:45:34.469059 kubelet[3258]: I1216 12:45:34.469031 3258 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 12:45:34.469193 kubelet[3258]: E1216 12:45:34.469165 3258 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:45:34.469339 kubelet[3258]: E1216 12:45:34.469313 3258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-6d618b7fe6?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms"
Dec 16 12:45:34.469605 kubelet[3258]: I1216 12:45:34.469585 3258 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:45:34.469761 kubelet[3258]: I1216 12:45:34.469745 3258 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:45:34.468000 audit[3276]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:45:34.468000 audit[3276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffec38f3a0 a2=0 a3=0 items=0 ppid=3258 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 16 12:45:34.471662 kubelet[3258]: E1216 12:45:34.471594 3258 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:45:34.471926 kubelet[3258]: I1216 12:45:34.471824 3258 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:45:34.472000 audit[3278]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:45:34.472000 audit[3278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffefbc5800 a2=0 a3=0 items=0 ppid=3258 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 16 12:45:34.493741 kubelet[3258]: I1216 12:45:34.493713 3258 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:45:34.493906 kubelet[3258]: I1216 12:45:34.493891 3258 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:45:34.493967 kubelet[3258]: I1216 12:45:34.493959 3258 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:45:34.563036 kubelet[3258]: E1216 12:45:34.562990 3258 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found"
Dec 16 12:45:34.663523 kubelet[3258]: E1216 12:45:34.663408 3258 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found"
Dec 16 12:45:34.670098 kubelet[3258]: E1216 12:45:34.670054 3258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-6d618b7fe6?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms"
Dec 16 12:45:34.698172 kubelet[3258]: I1216 12:45:34.698131 3258 policy_none.go:49] "None policy: Start"
Dec 16 12:45:34.698172 kubelet[3258]: I1216 12:45:34.698174 3258 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 12:45:34.698311 kubelet[3258]: I1216 12:45:34.698194 3258 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 12:45:34.709957 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 12:45:34.713000 audit[3286]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:45:34.713000 audit[3286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdc04e790 a2=0 a3=0 items=0 ppid=3258 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Dec 16 12:45:34.714758 kubelet[3258]: I1216 12:45:34.714716 3258 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:45:34.715000 audit[3287]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:45:34.715000 audit[3287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc44d0c80 a2=0 a3=0 items=0 ppid=3258 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Dec 16 12:45:34.716000 audit[3288]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:45:34.716000 audit[3288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3c378b0 a2=0 a3=0 items=0 ppid=3258 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Dec 16 12:45:34.718892 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 12:45:34.720149 kubelet[3258]: I1216 12:45:34.720125 3258 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:45:34.720149 kubelet[3258]: I1216 12:45:34.720148 3258 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 16 12:45:34.720240 kubelet[3258]: I1216 12:45:34.720167 3258 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:45:34.720240 kubelet[3258]: I1216 12:45:34.720172 3258 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 16 12:45:34.719000 audit[3289]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:45:34.719000 audit[3289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdaf8f6b0 a2=0 a3=0 items=0 ppid=3258 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Dec 16 12:45:34.720684 kubelet[3258]: E1216 12:45:34.720514 3258 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:45:34.722388 kubelet[3258]: E1216 12:45:34.722355 3258 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 12:45:34.721000 audit[3291]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3291 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:45:34.721000 audit[3291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc357d10 a2=0 a3=0 items=0 ppid=3258 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Dec 16 12:45:34.722000 audit[3290]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:45:34.722000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb1098b0 a2=0 a3=0 items=0 ppid=3258 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Dec 16 12:45:34.724000 audit[3292]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:45:34.724000 audit[3292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc295c200 a2=0 a3=0 items=0 ppid=3258 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.724000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Dec 16 12:45:34.726015 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 12:45:34.727000 audit[3293]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:45:34.727000 audit[3293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc91862f0 a2=0 a3=0 items=0 ppid=3258 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:45:34.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Dec 16 12:45:34.735528 kubelet[3258]: E1216 12:45:34.735028 3258 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 12:45:34.735528 kubelet[3258]: I1216 12:45:34.735250 3258 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:45:34.735528 kubelet[3258]: I1216 12:45:34.735270 3258 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:45:34.737643 kubelet[3258]: E1216 12:45:34.737257 3258 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:45:34.737643 kubelet[3258]: E1216 12:45:34.737294 3258 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-6d618b7fe6\" not found"
Dec 16 12:45:34.738252 kubelet[3258]: I1216 12:45:34.738236 3258 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:45:34.837857 kubelet[3258]: I1216 12:45:34.837831 3258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-6d618b7fe6"
Dec 16 12:45:34.838425 kubelet[3258]: E1216 12:45:34.838399 3258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4515.1.0-a-6d618b7fe6"
Dec 16 12:45:34.871575 kubelet[3258]: I1216 12:45:34.871489 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f717d87e98271c17d6408d55147f6cc-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-6d618b7fe6\" (UID: \"3f717d87e98271c17d6408d55147f6cc\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6"
Dec 16 12:45:34.871575 kubelet[3258]: I1216 12:45:34.871516 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f717d87e98271c17d6408d55147f6cc-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-6d618b7fe6\" (UID: \"3f717d87e98271c17d6408d55147f6cc\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6"
Dec 16 12:45:34.871575 kubelet[3258]: I1216 12:45:34.871530 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f717d87e98271c17d6408d55147f6cc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-6d618b7fe6\" (UID: \"3f717d87e98271c17d6408d55147f6cc\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6"
Dec 16 12:45:34.875335 systemd[1]: Created slice kubepods-burstable-pod3f717d87e98271c17d6408d55147f6cc.slice - libcontainer container kubepods-burstable-pod3f717d87e98271c17d6408d55147f6cc.slice.
Dec 16 12:45:34.884336 kubelet[3258]: E1216 12:45:34.884302 3258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6"
Dec 16 12:45:34.890386 systemd[1]: Created slice kubepods-burstable-podb762bbb22a0e3162fa2a222c27cabb05.slice - libcontainer container kubepods-burstable-podb762bbb22a0e3162fa2a222c27cabb05.slice.
Dec 16 12:45:34.900272 kubelet[3258]: E1216 12:45:34.900224 3258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6"
Dec 16 12:45:34.903940 systemd[1]: Created slice kubepods-burstable-pod8d3f97a285f9ed8d9f1f7c41feb7818f.slice - libcontainer container kubepods-burstable-pod8d3f97a285f9ed8d9f1f7c41feb7818f.slice.
Dec 16 12:45:34.905864 kubelet[3258]: E1216 12:45:34.905838 3258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:34.972243 kubelet[3258]: I1216 12:45:34.972125 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:34.972243 kubelet[3258]: I1216 12:45:34.972165 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:34.972243 kubelet[3258]: I1216 12:45:34.972181 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:34.972243 kubelet[3258]: I1216 12:45:34.972205 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " 
pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:34.972243 kubelet[3258]: I1216 12:45:34.972215 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:34.972418 kubelet[3258]: I1216 12:45:34.972227 3258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d3f97a285f9ed8d9f1f7c41feb7818f-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-6d618b7fe6\" (UID: \"8d3f97a285f9ed8d9f1f7c41feb7818f\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:35.040774 kubelet[3258]: I1216 12:45:35.040751 3258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:35.041365 kubelet[3258]: E1216 12:45:35.041340 3258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:35.070991 kubelet[3258]: E1216 12:45:35.070941 3258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-6d618b7fe6?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Dec 16 12:45:35.186552 containerd[2018]: time="2025-12-16T12:45:35.186510388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-6d618b7fe6,Uid:3f717d87e98271c17d6408d55147f6cc,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:35.201681 containerd[2018]: 
time="2025-12-16T12:45:35.201643322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-6d618b7fe6,Uid:b762bbb22a0e3162fa2a222c27cabb05,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:35.207391 containerd[2018]: time="2025-12-16T12:45:35.207331468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-6d618b7fe6,Uid:8d3f97a285f9ed8d9f1f7c41feb7818f,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:35.438641 kubelet[3258]: E1216 12:45:35.438326 3258 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:45:35.443402 kubelet[3258]: I1216 12:45:35.443356 3258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:35.443761 kubelet[3258]: E1216 12:45:35.443733 3258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:35.810749 kubelet[3258]: E1216 12:45:35.810702 3258 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:45:35.871736 kubelet[3258]: E1216 12:45:35.871694 3258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-6d618b7fe6?timeout=10s\": dial tcp 10.200.20.38:6443: connect: 
connection refused" interval="1.6s" Dec 16 12:45:35.904439 kubelet[3258]: E1216 12:45:35.904391 3258 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-6d618b7fe6&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:45:36.196105 kubelet[3258]: E1216 12:45:36.195967 3258 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:45:36.216842 containerd[2018]: time="2025-12-16T12:45:36.216243500Z" level=info msg="connecting to shim 42ae4c2413f4cadfdea913e5c1982cd2b57f9a8d5acfca669e585eda6c39d8c0" address="unix:///run/containerd/s/21218ca7219f48e60effef747ed71cb8d6845b35723ffec78d408cf1235dd973" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:36.230434 containerd[2018]: time="2025-12-16T12:45:36.230386728Z" level=info msg="connecting to shim 48b8b4126908bb0a07d2a24d3960a8f643456c5f54c78eeb421d2b48bd481395" address="unix:///run/containerd/s/5e542d20608fbf22c3047648f599201080ef36e16833ebf1c895133772db908e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:36.235153 containerd[2018]: time="2025-12-16T12:45:36.233863055Z" level=info msg="connecting to shim a8eba5fce1b5828c3ca2e0c4fb3bcd9d4abdd4e6e61cd19745c31496610eda53" address="unix:///run/containerd/s/9f487712e46802f3fe3b2cc09c37d5308eaf1544ec01f88ce8ae716cdaacb4a5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:36.246449 kubelet[3258]: I1216 12:45:36.246029 3258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:36.246449 
kubelet[3258]: E1216 12:45:36.246388 3258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:36.262194 systemd[1]: Started cri-containerd-a8eba5fce1b5828c3ca2e0c4fb3bcd9d4abdd4e6e61cd19745c31496610eda53.scope - libcontainer container a8eba5fce1b5828c3ca2e0c4fb3bcd9d4abdd4e6e61cd19745c31496610eda53. Dec 16 12:45:36.266827 systemd[1]: Started cri-containerd-42ae4c2413f4cadfdea913e5c1982cd2b57f9a8d5acfca669e585eda6c39d8c0.scope - libcontainer container 42ae4c2413f4cadfdea913e5c1982cd2b57f9a8d5acfca669e585eda6c39d8c0. Dec 16 12:45:36.274819 systemd[1]: Started cri-containerd-48b8b4126908bb0a07d2a24d3960a8f643456c5f54c78eeb421d2b48bd481395.scope - libcontainer container 48b8b4126908bb0a07d2a24d3960a8f643456c5f54c78eeb421d2b48bd481395. Dec 16 12:45:36.282000 audit: BPF prog-id=106 op=LOAD Dec 16 12:45:36.282000 audit: BPF prog-id=107 op=LOAD Dec 16 12:45:36.282000 audit[3356]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3338 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138656261356663653162353832386333636132653063346662336263 Dec 16 12:45:36.283000 audit: BPF prog-id=107 op=UNLOAD Dec 16 12:45:36.283000 audit[3356]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138656261356663653162353832386333636132653063346662336263 Dec 16 12:45:36.283000 audit: BPF prog-id=108 op=LOAD Dec 16 12:45:36.283000 audit[3356]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3338 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138656261356663653162353832386333636132653063346662336263 Dec 16 12:45:36.284000 audit: BPF prog-id=109 op=LOAD Dec 16 12:45:36.284000 audit[3356]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3338 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138656261356663653162353832386333636132653063346662336263 Dec 16 12:45:36.284000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:45:36.284000 audit[3356]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138656261356663653162353832386333636132653063346662336263 Dec 16 12:45:36.284000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:45:36.284000 audit[3356]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.284000 audit: BPF prog-id=110 op=LOAD Dec 16 12:45:36.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138656261356663653162353832386333636132653063346662336263 Dec 16 12:45:36.284000 audit: BPF prog-id=111 op=LOAD Dec 16 12:45:36.284000 audit[3356]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3338 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.285000 audit: BPF prog-id=112 op=LOAD Dec 16 12:45:36.285000 audit[3335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3301 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.285000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432616534633234313366346361646664656139313365356331393832 Dec 16 12:45:36.285000 audit: BPF prog-id=112 op=UNLOAD Dec 16 12:45:36.285000 audit[3335]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3301 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432616534633234313366346361646664656139313365356331393832 Dec 16 12:45:36.285000 audit: BPF prog-id=113 op=LOAD Dec 16 12:45:36.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138656261356663653162353832386333636132653063346662336263 Dec 16 12:45:36.285000 audit[3335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3301 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432616534633234313366346361646664656139313365356331393832 Dec 16 12:45:36.285000 audit: BPF prog-id=114 op=LOAD Dec 16 
12:45:36.285000 audit[3335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3301 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432616534633234313366346361646664656139313365356331393832 Dec 16 12:45:36.286000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:45:36.286000 audit[3335]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3301 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432616534633234313366346361646664656139313365356331393832 Dec 16 12:45:36.286000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:45:36.286000 audit[3335]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3301 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432616534633234313366346361646664656139313365356331393832 Dec 16 12:45:36.286000 
audit: BPF prog-id=115 op=LOAD Dec 16 12:45:36.286000 audit[3335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3301 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432616534633234313366346361646664656139313365356331393832 Dec 16 12:45:36.291000 audit: BPF prog-id=116 op=LOAD Dec 16 12:45:36.292000 audit: BPF prog-id=117 op=LOAD Dec 16 12:45:36.292000 audit[3358]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3326 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623862343132363930386262306130376432613234643339363061 Dec 16 12:45:36.292000 audit: BPF prog-id=117 op=UNLOAD Dec 16 12:45:36.292000 audit[3358]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.292000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623862343132363930386262306130376432613234643339363061 Dec 16 12:45:36.292000 audit: BPF prog-id=118 op=LOAD Dec 16 12:45:36.292000 audit[3358]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3326 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623862343132363930386262306130376432613234643339363061 Dec 16 12:45:36.292000 audit: BPF prog-id=119 op=LOAD Dec 16 12:45:36.292000 audit[3358]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3326 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623862343132363930386262306130376432613234643339363061 Dec 16 12:45:36.292000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:45:36.292000 audit[3358]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:45:36.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623862343132363930386262306130376432613234643339363061 Dec 16 12:45:36.292000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:45:36.292000 audit[3358]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623862343132363930386262306130376432613234643339363061 Dec 16 12:45:36.292000 audit: BPF prog-id=120 op=LOAD Dec 16 12:45:36.292000 audit[3358]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3326 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623862343132363930386262306130376432613234643339363061 Dec 16 12:45:36.328407 containerd[2018]: time="2025-12-16T12:45:36.328319369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-6d618b7fe6,Uid:3f717d87e98271c17d6408d55147f6cc,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a8eba5fce1b5828c3ca2e0c4fb3bcd9d4abdd4e6e61cd19745c31496610eda53\"" Dec 16 12:45:36.332755 containerd[2018]: time="2025-12-16T12:45:36.332722055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-6d618b7fe6,Uid:b762bbb22a0e3162fa2a222c27cabb05,Namespace:kube-system,Attempt:0,} returns sandbox id \"42ae4c2413f4cadfdea913e5c1982cd2b57f9a8d5acfca669e585eda6c39d8c0\"" Dec 16 12:45:36.337683 containerd[2018]: time="2025-12-16T12:45:36.337202625Z" level=info msg="CreateContainer within sandbox \"a8eba5fce1b5828c3ca2e0c4fb3bcd9d4abdd4e6e61cd19745c31496610eda53\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:45:36.339504 containerd[2018]: time="2025-12-16T12:45:36.339450534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-6d618b7fe6,Uid:8d3f97a285f9ed8d9f1f7c41feb7818f,Namespace:kube-system,Attempt:0,} returns sandbox id \"48b8b4126908bb0a07d2a24d3960a8f643456c5f54c78eeb421d2b48bd481395\"" Dec 16 12:45:36.341961 containerd[2018]: time="2025-12-16T12:45:36.341931899Z" level=info msg="CreateContainer within sandbox \"42ae4c2413f4cadfdea913e5c1982cd2b57f9a8d5acfca669e585eda6c39d8c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:45:36.347297 containerd[2018]: time="2025-12-16T12:45:36.347255777Z" level=info msg="CreateContainer within sandbox \"48b8b4126908bb0a07d2a24d3960a8f643456c5f54c78eeb421d2b48bd481395\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:45:36.371224 containerd[2018]: time="2025-12-16T12:45:36.371179924Z" level=info msg="Container e038eafe557bb456b4ef6b52ed8a8840cf72604a9231feaf9acd528b2c2038d0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:36.376028 containerd[2018]: time="2025-12-16T12:45:36.375978520Z" level=info msg="Container d428213372e7ab127ffadf528e3953270d7650dd19ffa12b2ae2a000321ae6d6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:36.402940 
containerd[2018]: time="2025-12-16T12:45:36.402319414Z" level=info msg="Container 521903d5e9020c6f3c88af33c769f83312de251579cb0d2cbfa9762ee1c532ae: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:36.414959 containerd[2018]: time="2025-12-16T12:45:36.414911245Z" level=info msg="CreateContainer within sandbox \"a8eba5fce1b5828c3ca2e0c4fb3bcd9d4abdd4e6e61cd19745c31496610eda53\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e038eafe557bb456b4ef6b52ed8a8840cf72604a9231feaf9acd528b2c2038d0\"" Dec 16 12:45:36.415923 containerd[2018]: time="2025-12-16T12:45:36.415861869Z" level=info msg="CreateContainer within sandbox \"42ae4c2413f4cadfdea913e5c1982cd2b57f9a8d5acfca669e585eda6c39d8c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d428213372e7ab127ffadf528e3953270d7650dd19ffa12b2ae2a000321ae6d6\"" Dec 16 12:45:36.416112 containerd[2018]: time="2025-12-16T12:45:36.416088237Z" level=info msg="StartContainer for \"e038eafe557bb456b4ef6b52ed8a8840cf72604a9231feaf9acd528b2c2038d0\"" Dec 16 12:45:36.417119 containerd[2018]: time="2025-12-16T12:45:36.417085831Z" level=info msg="connecting to shim e038eafe557bb456b4ef6b52ed8a8840cf72604a9231feaf9acd528b2c2038d0" address="unix:///run/containerd/s/9f487712e46802f3fe3b2cc09c37d5308eaf1544ec01f88ce8ae716cdaacb4a5" protocol=ttrpc version=3 Dec 16 12:45:36.417541 containerd[2018]: time="2025-12-16T12:45:36.417517526Z" level=info msg="StartContainer for \"d428213372e7ab127ffadf528e3953270d7650dd19ffa12b2ae2a000321ae6d6\"" Dec 16 12:45:36.418575 containerd[2018]: time="2025-12-16T12:45:36.418500424Z" level=info msg="connecting to shim d428213372e7ab127ffadf528e3953270d7650dd19ffa12b2ae2a000321ae6d6" address="unix:///run/containerd/s/21218ca7219f48e60effef747ed71cb8d6845b35723ffec78d408cf1235dd973" protocol=ttrpc version=3 Dec 16 12:45:36.427068 containerd[2018]: time="2025-12-16T12:45:36.426896119Z" level=info msg="CreateContainer within sandbox 
\"48b8b4126908bb0a07d2a24d3960a8f643456c5f54c78eeb421d2b48bd481395\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"521903d5e9020c6f3c88af33c769f83312de251579cb0d2cbfa9762ee1c532ae\"" Dec 16 12:45:36.429000 containerd[2018]: time="2025-12-16T12:45:36.427958236Z" level=info msg="StartContainer for \"521903d5e9020c6f3c88af33c769f83312de251579cb0d2cbfa9762ee1c532ae\"" Dec 16 12:45:36.429000 containerd[2018]: time="2025-12-16T12:45:36.428861690Z" level=info msg="connecting to shim 521903d5e9020c6f3c88af33c769f83312de251579cb0d2cbfa9762ee1c532ae" address="unix:///run/containerd/s/5e542d20608fbf22c3047648f599201080ef36e16833ebf1c895133772db908e" protocol=ttrpc version=3 Dec 16 12:45:36.441779 systemd[1]: Started cri-containerd-d428213372e7ab127ffadf528e3953270d7650dd19ffa12b2ae2a000321ae6d6.scope - libcontainer container d428213372e7ab127ffadf528e3953270d7650dd19ffa12b2ae2a000321ae6d6. Dec 16 12:45:36.445593 systemd[1]: Started cri-containerd-e038eafe557bb456b4ef6b52ed8a8840cf72604a9231feaf9acd528b2c2038d0.scope - libcontainer container e038eafe557bb456b4ef6b52ed8a8840cf72604a9231feaf9acd528b2c2038d0. Dec 16 12:45:36.460042 systemd[1]: Started cri-containerd-521903d5e9020c6f3c88af33c769f83312de251579cb0d2cbfa9762ee1c532ae.scope - libcontainer container 521903d5e9020c6f3c88af33c769f83312de251579cb0d2cbfa9762ee1c532ae. 
Dec 16 12:45:36.467000 audit: BPF prog-id=121 op=LOAD Dec 16 12:45:36.467000 audit: BPF prog-id=122 op=LOAD Dec 16 12:45:36.467000 audit[3431]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3338 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333865616665353537626234353662346566366235326564386138 Dec 16 12:45:36.468000 audit: BPF prog-id=122 op=UNLOAD Dec 16 12:45:36.468000 audit[3431]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333865616665353537626234353662346566366235326564386138 Dec 16 12:45:36.468000 audit: BPF prog-id=123 op=LOAD Dec 16 12:45:36.468000 audit[3431]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3338 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.468000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333865616665353537626234353662346566366235326564386138 Dec 16 12:45:36.468000 audit: BPF prog-id=124 op=LOAD Dec 16 12:45:36.468000 audit[3431]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3338 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333865616665353537626234353662346566366235326564386138 Dec 16 12:45:36.468000 audit: BPF prog-id=124 op=UNLOAD Dec 16 12:45:36.468000 audit[3431]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333865616665353537626234353662346566366235326564386138 Dec 16 12:45:36.468000 audit: BPF prog-id=123 op=UNLOAD Dec 16 12:45:36.468000 audit[3431]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3338 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:45:36.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333865616665353537626234353662346566366235326564386138 Dec 16 12:45:36.468000 audit: BPF prog-id=125 op=LOAD Dec 16 12:45:36.468000 audit[3431]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3338 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530333865616665353537626234353662346566366235326564386138 Dec 16 12:45:36.476000 audit: BPF prog-id=126 op=LOAD Dec 16 12:45:36.477000 audit: BPF prog-id=127 op=LOAD Dec 16 12:45:36.477000 audit[3432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3301 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323832313333373265376162313237666661646635323865333935 Dec 16 12:45:36.477000 audit: BPF prog-id=127 op=UNLOAD Dec 16 12:45:36.477000 audit[3432]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3301 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323832313333373265376162313237666661646635323865333935 Dec 16 12:45:36.477000 audit: BPF prog-id=128 op=LOAD Dec 16 12:45:36.477000 audit[3432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3301 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323832313333373265376162313237666661646635323865333935 Dec 16 12:45:36.477000 audit: BPF prog-id=129 op=LOAD Dec 16 12:45:36.477000 audit[3432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3301 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323832313333373265376162313237666661646635323865333935 Dec 16 12:45:36.478000 audit: BPF prog-id=129 op=UNLOAD Dec 16 12:45:36.478000 audit[3432]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3301 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323832313333373265376162313237666661646635323865333935 Dec 16 12:45:36.478000 audit: BPF prog-id=128 op=UNLOAD Dec 16 12:45:36.478000 audit[3432]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3301 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323832313333373265376162313237666661646635323865333935 Dec 16 12:45:36.478000 audit: BPF prog-id=130 op=LOAD Dec 16 12:45:36.478000 audit[3432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3301 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323832313333373265376162313237666661646635323865333935 Dec 16 12:45:36.482000 audit: BPF prog-id=131 op=LOAD Dec 16 12:45:36.485000 audit: BPF prog-id=132 op=LOAD Dec 16 12:45:36.485000 audit[3454]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3326 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313930336435653930323063366633633838616633336337363966 Dec 16 12:45:36.485000 audit: BPF prog-id=132 op=UNLOAD Dec 16 12:45:36.485000 audit[3454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313930336435653930323063366633633838616633336337363966 Dec 16 12:45:36.485000 audit: BPF prog-id=133 op=LOAD Dec 16 12:45:36.485000 audit[3454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3326 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313930336435653930323063366633633838616633336337363966 Dec 16 12:45:36.485000 audit: BPF prog-id=134 op=LOAD Dec 16 12:45:36.485000 audit[3454]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3326 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313930336435653930323063366633633838616633336337363966 Dec 16 12:45:36.485000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:45:36.485000 audit[3454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313930336435653930323063366633633838616633336337363966 Dec 16 12:45:36.485000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:45:36.485000 audit[3454]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3326 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313930336435653930323063366633633838616633336337363966 Dec 16 12:45:36.485000 audit: BPF prog-id=135 op=LOAD 
Dec 16 12:45:36.485000 audit[3454]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3326 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:36.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532313930336435653930323063366633633838616633336337363966 Dec 16 12:45:36.520215 containerd[2018]: time="2025-12-16T12:45:36.520101318Z" level=info msg="StartContainer for \"e038eafe557bb456b4ef6b52ed8a8840cf72604a9231feaf9acd528b2c2038d0\" returns successfully" Dec 16 12:45:36.543585 containerd[2018]: time="2025-12-16T12:45:36.543532856Z" level=info msg="StartContainer for \"d428213372e7ab127ffadf528e3953270d7650dd19ffa12b2ae2a000321ae6d6\" returns successfully" Dec 16 12:45:36.543729 containerd[2018]: time="2025-12-16T12:45:36.543659276Z" level=info msg="StartContainer for \"521903d5e9020c6f3c88af33c769f83312de251579cb0d2cbfa9762ee1c532ae\" returns successfully" Dec 16 12:45:36.545887 kubelet[3258]: E1216 12:45:36.545843 3258 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:45:36.735912 kubelet[3258]: E1216 12:45:36.735560 3258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:36.736713 kubelet[3258]: E1216 12:45:36.736564 3258 kubelet.go:3305] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:36.741878 kubelet[3258]: E1216 12:45:36.741842 3258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:37.743225 kubelet[3258]: E1216 12:45:37.743191 3258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:37.743608 kubelet[3258]: E1216 12:45:37.743567 3258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:37.849146 kubelet[3258]: I1216 12:45:37.849117 3258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.157389 kubelet[3258]: E1216 12:45:38.157330 3258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-a-6d618b7fe6\" not found" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.204069 kubelet[3258]: I1216 12:45:38.204025 3258 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.204069 kubelet[3258]: E1216 12:45:38.204071 3258 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-a-6d618b7fe6\": node \"ci-4515.1.0-a-6d618b7fe6\" not found" Dec 16 12:45:38.231880 kubelet[3258]: E1216 12:45:38.231829 3258 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-6d618b7fe6\" not found" Dec 16 12:45:38.362521 kubelet[3258]: I1216 12:45:38.362471 3258 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.371602 kubelet[3258]: E1216 12:45:38.371557 3258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-6d618b7fe6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.371602 kubelet[3258]: I1216 12:45:38.371610 3258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.373619 kubelet[3258]: E1216 12:45:38.373591 3258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.373681 kubelet[3258]: I1216 12:45:38.373621 3258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.375106 kubelet[3258]: E1216 12:45:38.375031 3258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-6d618b7fe6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:38.455300 kubelet[3258]: I1216 12:45:38.455184 3258 apiserver.go:52] "Watching apiserver" Dec 16 12:45:38.462071 kubelet[3258]: I1216 12:45:38.462029 3258 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:45:39.282030 kubelet[3258]: I1216 12:45:39.281986 3258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:39.290368 kubelet[3258]: I1216 12:45:39.290323 3258 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots]" Dec 16 12:45:40.439940 kubelet[3258]: I1216 12:45:40.439674 3258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:40.444889 kubelet[3258]: I1216 12:45:40.444845 3258 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:40.637223 systemd[1]: Reload requested from client PID 3538 ('systemctl') (unit session-9.scope)... Dec 16 12:45:40.637546 systemd[1]: Reloading... Dec 16 12:45:40.770977 zram_generator::config[3594]: No configuration found. Dec 16 12:45:40.949004 systemd[1]: Reloading finished in 310 ms. Dec 16 12:45:40.973382 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:40.992953 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:45:40.993217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:41.011662 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:45:41.011781 kernel: audit: type=1131 audit(1765889140.992:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:40.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:40.993293 systemd[1]: kubelet.service: Consumed 675ms CPU time, 124.9M memory peak. Dec 16 12:45:40.999170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:45:41.011000 audit: BPF prog-id=136 op=LOAD Dec 16 12:45:41.020900 kernel: audit: type=1334 audit(1765889141.011:408): prog-id=136 op=LOAD Dec 16 12:45:41.021000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:45:41.032414 kernel: audit: type=1334 audit(1765889141.021:409): prog-id=102 op=UNLOAD Dec 16 12:45:41.032459 kernel: audit: type=1334 audit(1765889141.025:410): prog-id=137 op=LOAD Dec 16 12:45:41.025000 audit: BPF prog-id=137 op=LOAD Dec 16 12:45:41.025000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:45:41.038351 kernel: audit: type=1334 audit(1765889141.025:411): prog-id=96 op=UNLOAD Dec 16 12:45:41.030000 audit: BPF prog-id=138 op=LOAD Dec 16 12:45:41.042714 kernel: audit: type=1334 audit(1765889141.030:412): prog-id=138 op=LOAD Dec 16 12:45:41.030000 audit: BPF prog-id=139 op=LOAD Dec 16 12:45:41.047333 kernel: audit: type=1334 audit(1765889141.030:413): prog-id=139 op=LOAD Dec 16 12:45:41.030000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:45:41.052831 kernel: audit: type=1334 audit(1765889141.030:414): prog-id=97 op=UNLOAD Dec 16 12:45:41.030000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:45:41.058120 kernel: audit: type=1334 audit(1765889141.030:415): prog-id=98 op=UNLOAD Dec 16 12:45:41.031000 audit: BPF prog-id=140 op=LOAD Dec 16 12:45:41.064110 kernel: audit: type=1334 audit(1765889141.031:416): prog-id=140 op=LOAD Dec 16 12:45:41.031000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:45:41.031000 audit: BPF prog-id=141 op=LOAD Dec 16 12:45:41.031000 audit: BPF prog-id=142 op=LOAD Dec 16 12:45:41.031000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:45:41.031000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:45:41.031000 audit: BPF prog-id=143 op=LOAD Dec 16 12:45:41.031000 audit: BPF prog-id=144 op=LOAD Dec 16 12:45:41.031000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:45:41.031000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:45:41.037000 audit: BPF prog-id=145 op=LOAD Dec 16 12:45:41.037000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:45:41.037000 audit: BPF prog-id=146 
op=LOAD Dec 16 12:45:41.037000 audit: BPF prog-id=147 op=LOAD Dec 16 12:45:41.037000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:45:41.037000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:45:41.041000 audit: BPF prog-id=148 op=LOAD Dec 16 12:45:41.041000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:45:41.046000 audit: BPF prog-id=149 op=LOAD Dec 16 12:45:41.046000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:45:41.052000 audit: BPF prog-id=150 op=LOAD Dec 16 12:45:41.052000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:45:41.052000 audit: BPF prog-id=151 op=LOAD Dec 16 12:45:41.056000 audit: BPF prog-id=152 op=LOAD Dec 16 12:45:41.056000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:45:41.056000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:45:41.057000 audit: BPF prog-id=153 op=LOAD Dec 16 12:45:41.057000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:45:41.057000 audit: BPF prog-id=154 op=LOAD Dec 16 12:45:41.061000 audit: BPF prog-id=155 op=LOAD Dec 16 12:45:41.061000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:45:41.061000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:45:41.454206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:41.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:41.462343 (kubelet)[3651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:45:41.494946 kubelet[3651]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:41.494946 kubelet[3651]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 16 12:45:41.494946 kubelet[3651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:41.495323 kubelet[3651]: I1216 12:45:41.494972 3651 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:45:41.500936 kubelet[3651]: I1216 12:45:41.500895 3651 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:45:41.500936 kubelet[3651]: I1216 12:45:41.500926 3651 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:45:41.501140 kubelet[3651]: I1216 12:45:41.501123 3651 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:45:41.502263 kubelet[3651]: I1216 12:45:41.502239 3651 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:45:41.504150 kubelet[3651]: I1216 12:45:41.504121 3651 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:45:41.508359 kubelet[3651]: I1216 12:45:41.508333 3651 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:45:41.511959 kubelet[3651]: I1216 12:45:41.511901 3651 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:45:41.512155 kubelet[3651]: I1216 12:45:41.512124 3651 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:45:41.512406 kubelet[3651]: I1216 12:45:41.512177 3651 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-6d618b7fe6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:45:41.512406 kubelet[3651]: I1216 12:45:41.512402 3651 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:45:41.512520 kubelet[3651]: I1216 12:45:41.512416 3651 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:45:41.512520 kubelet[3651]: I1216 12:45:41.512454 3651 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:41.512796 kubelet[3651]: I1216 12:45:41.512581 3651 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:45:41.512796 kubelet[3651]: I1216 12:45:41.512598 3651 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:45:41.512796 kubelet[3651]: I1216 12:45:41.512618 3651 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:45:41.512796 kubelet[3651]: I1216 12:45:41.512628 3651 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:45:41.516947 kubelet[3651]: I1216 12:45:41.516920 3651 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:45:41.517373 kubelet[3651]: I1216 12:45:41.517353 3651 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:45:41.519352 kubelet[3651]: I1216 12:45:41.519332 3651 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:45:41.519421 kubelet[3651]: I1216 12:45:41.519391 3651 server.go:1289] "Started kubelet" Dec 16 12:45:41.523554 kubelet[3651]: I1216 12:45:41.523518 3651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:45:41.533479 kubelet[3651]: I1216 12:45:41.533425 3651 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:45:41.535388 kubelet[3651]: I1216 12:45:41.534488 3651 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:45:41.539041 kubelet[3651]: I1216 12:45:41.539012 3651 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:45:41.544487 kubelet[3651]: I1216 12:45:41.540229 3651 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:45:41.544487 kubelet[3651]: I1216 12:45:41.540300 3651 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:45:41.551968 kubelet[3651]: I1216 12:45:41.551933 3651 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:45:41.553756 kubelet[3651]: I1216 12:45:41.553709 3651 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:45:41.560579 kubelet[3651]: E1216 12:45:41.560219 3651 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:45:41.560718 kubelet[3651]: I1216 12:45:41.560638 3651 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:45:41.560718 kubelet[3651]: I1216 12:45:41.560651 3651 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:45:41.562946 kubelet[3651]: I1216 12:45:41.562847 3651 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:45:41.565279 kubelet[3651]: I1216 12:45:41.564842 3651 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:45:41.565279 kubelet[3651]: I1216 12:45:41.564866 3651 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:45:41.565279 kubelet[3651]: I1216 12:45:41.564922 3651 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:45:41.565279 kubelet[3651]: I1216 12:45:41.564928 3651 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:45:41.565279 kubelet[3651]: E1216 12:45:41.564978 3651 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:45:41.603495 kubelet[3651]: I1216 12:45:41.603370 3651 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:45:41.603495 kubelet[3651]: I1216 12:45:41.603390 3651 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:45:41.603495 kubelet[3651]: I1216 12:45:41.603414 3651 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:41.666155 kubelet[3651]: E1216 12:45:41.666101 3651 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:45:42.047616 kubelet[3651]: E1216 12:45:41.866991 3651 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:45:42.047616 kubelet[3651]: I1216 12:45:42.046082 3651 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:45:42.047616 kubelet[3651]: I1216 12:45:42.046107 3651 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:45:42.047616 kubelet[3651]: I1216 12:45:42.046128 3651 policy_none.go:49] "None policy: Start" Dec 16 12:45:42.047616 kubelet[3651]: I1216 12:45:42.046138 3651 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:45:42.047616 kubelet[3651]: I1216 12:45:42.046148 3651 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:45:42.047616 kubelet[3651]: I1216 12:45:42.046222 3651 state_mem.go:75] "Updated machine memory state" Dec 16 12:45:42.047616 kubelet[3651]: I1216 12:45:42.046628 3651 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:45:42.047616 kubelet[3651]: I1216 12:45:42.046909 3651 server.go:255] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:45:42.056202 kubelet[3651]: E1216 12:45:42.054025 3651 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:45:42.056202 kubelet[3651]: I1216 12:45:42.054193 3651 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:45:42.056202 kubelet[3651]: I1216 12:45:42.054204 3651 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:45:42.056202 kubelet[3651]: I1216 12:45:42.054680 3651 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:45:42.058669 kubelet[3651]: E1216 12:45:42.058647 3651 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:45:42.164258 kubelet[3651]: I1216 12:45:42.164197 3651 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.180341 kubelet[3651]: I1216 12:45:42.180170 3651 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.181157 kubelet[3651]: I1216 12:45:42.180975 3651 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.268756 kubelet[3651]: I1216 12:45:42.268700 3651 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.269863 kubelet[3651]: I1216 12:45:42.269745 3651 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.270568 kubelet[3651]: I1216 12:45:42.269799 3651 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.282402 kubelet[3651]: I1216 12:45:42.282276 3651 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:42.283108 kubelet[3651]: I1216 12:45:42.282740 3651 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:42.283361 kubelet[3651]: E1216 12:45:42.283333 3651 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-6d618b7fe6\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.284191 kubelet[3651]: I1216 12:45:42.284047 3651 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:45:42.284191 kubelet[3651]: E1216 12:45:42.284118 3651 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-6d618b7fe6\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.348245 kubelet[3651]: I1216 12:45:42.348103 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d3f97a285f9ed8d9f1f7c41feb7818f-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-6d618b7fe6\" (UID: \"8d3f97a285f9ed8d9f1f7c41feb7818f\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.348816 kubelet[3651]: I1216 12:45:42.348684 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f717d87e98271c17d6408d55147f6cc-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-6d618b7fe6\" (UID: \"3f717d87e98271c17d6408d55147f6cc\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.348816 kubelet[3651]: I1216 
12:45:42.348712 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f717d87e98271c17d6408d55147f6cc-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-6d618b7fe6\" (UID: \"3f717d87e98271c17d6408d55147f6cc\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.348816 kubelet[3651]: I1216 12:45:42.348725 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.348816 kubelet[3651]: I1216 12:45:42.348773 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.348816 kubelet[3651]: I1216 12:45:42.348783 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.349086 kubelet[3651]: I1216 12:45:42.348894 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f717d87e98271c17d6408d55147f6cc-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4515.1.0-a-6d618b7fe6\" (UID: \"3f717d87e98271c17d6408d55147f6cc\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.349086 kubelet[3651]: I1216 12:45:42.348913 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.349086 kubelet[3651]: I1216 12:45:42.348924 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b762bbb22a0e3162fa2a222c27cabb05-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-6d618b7fe6\" (UID: \"b762bbb22a0e3162fa2a222c27cabb05\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" Dec 16 12:45:42.514650 kubelet[3651]: I1216 12:45:42.514412 3651 apiserver.go:52] "Watching apiserver" Dec 16 12:45:42.540933 kubelet[3651]: I1216 12:45:42.540860 3651 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:45:42.633125 kubelet[3651]: I1216 12:45:42.632821 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-6d618b7fe6" podStartSLOduration=0.632802017 podStartE2EDuration="632.802017ms" podCreationTimestamp="2025-12-16 12:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:42.615471431 +0000 UTC m=+1.149175005" watchObservedRunningTime="2025-12-16 12:45:42.632802017 +0000 UTC m=+1.166505583" Dec 16 12:45:42.655354 kubelet[3651]: I1216 12:45:42.652919 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4515.1.0-a-6d618b7fe6" podStartSLOduration=3.652903889 podStartE2EDuration="3.652903889s" podCreationTimestamp="2025-12-16 12:45:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:42.652822702 +0000 UTC m=+1.186526268" watchObservedRunningTime="2025-12-16 12:45:42.652903889 +0000 UTC m=+1.186607463" Dec 16 12:45:42.655354 kubelet[3651]: I1216 12:45:42.653081 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-a-6d618b7fe6" podStartSLOduration=2.65307239 podStartE2EDuration="2.65307239s" podCreationTimestamp="2025-12-16 12:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:42.635555061 +0000 UTC m=+1.169258627" watchObservedRunningTime="2025-12-16 12:45:42.65307239 +0000 UTC m=+1.186775956" Dec 16 12:45:45.396259 kubelet[3651]: I1216 12:45:45.396221 3651 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:45:45.396646 containerd[2018]: time="2025-12-16T12:45:45.396610306Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:45:45.396955 kubelet[3651]: I1216 12:45:45.396809 3651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:45:46.552603 systemd[1]: Created slice kubepods-besteffort-pod48864e7d_8371_4c37_b856_3e7e89bd35b5.slice - libcontainer container kubepods-besteffort-pod48864e7d_8371_4c37_b856_3e7e89bd35b5.slice. 
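The `warnings.go:110` messages earlier in this log fire because the static-pod names (e.g. `kube-scheduler-ci-4515.1.0-a-6d618b7fe6`) contain dots, and Kubernetes uses `metadata.name` as the pod's hostname, which must be an RFC 1123 DNS label. A minimal sketch of that label check (the regex below is an illustration of the RFC 1123 rule, not the kubelet's actual validation code):

```python
import re

# RFC 1123 DNS label: lowercase alphanumerics and '-', must start and
# end with an alphanumeric, at most 63 characters. Kubernetes applies
# this shape to pod hostnames, which is why names containing dots draw
# the "surprising behavior" warning seen above.
DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

def is_dns_label(name: str) -> bool:
    """Return True if `name` is a valid RFC 1123 DNS label."""
    return bool(DNS_LABEL.match(name))

print(is_dns_label("kube-proxy-9tfks"))                         # True: valid label
print(is_dns_label("kube-scheduler-ci-4515.1.0-a-6d618b7fe6"))  # False: contains dots
```

The mirror pods are still created; the warning only flags that the dotted name will be truncated or rejected wherever a strict DNS label is required.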
Dec 16 12:45:46.578939 kubelet[3651]: I1216 12:45:46.578380 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48864e7d-8371-4c37-b856-3e7e89bd35b5-xtables-lock\") pod \"kube-proxy-9tfks\" (UID: \"48864e7d-8371-4c37-b856-3e7e89bd35b5\") " pod="kube-system/kube-proxy-9tfks" Dec 16 12:45:46.578939 kubelet[3651]: I1216 12:45:46.578814 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48864e7d-8371-4c37-b856-3e7e89bd35b5-lib-modules\") pod \"kube-proxy-9tfks\" (UID: \"48864e7d-8371-4c37-b856-3e7e89bd35b5\") " pod="kube-system/kube-proxy-9tfks" Dec 16 12:45:46.578939 kubelet[3651]: I1216 12:45:46.578837 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/48864e7d-8371-4c37-b856-3e7e89bd35b5-kube-proxy\") pod \"kube-proxy-9tfks\" (UID: \"48864e7d-8371-4c37-b856-3e7e89bd35b5\") " pod="kube-system/kube-proxy-9tfks" Dec 16 12:45:46.578939 kubelet[3651]: I1216 12:45:46.578850 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2vnw\" (UniqueName: \"kubernetes.io/projected/48864e7d-8371-4c37-b856-3e7e89bd35b5-kube-api-access-b2vnw\") pod \"kube-proxy-9tfks\" (UID: \"48864e7d-8371-4c37-b856-3e7e89bd35b5\") " pod="kube-system/kube-proxy-9tfks" Dec 16 12:45:46.641567 systemd[1]: Created slice kubepods-besteffort-pod3d7ae9eb_6412_4816_9de4_11b7fb7682d8.slice - libcontainer container kubepods-besteffort-pod3d7ae9eb_6412_4816_9de4_11b7fb7682d8.slice. 
Dec 16 12:45:46.679910 kubelet[3651]: I1216 12:45:46.679554 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5qwg\" (UniqueName: \"kubernetes.io/projected/3d7ae9eb-6412-4816-9de4-11b7fb7682d8-kube-api-access-n5qwg\") pod \"tigera-operator-7dcd859c48-d4wjh\" (UID: \"3d7ae9eb-6412-4816-9de4-11b7fb7682d8\") " pod="tigera-operator/tigera-operator-7dcd859c48-d4wjh" Dec 16 12:45:46.679910 kubelet[3651]: I1216 12:45:46.679609 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d7ae9eb-6412-4816-9de4-11b7fb7682d8-var-lib-calico\") pod \"tigera-operator-7dcd859c48-d4wjh\" (UID: \"3d7ae9eb-6412-4816-9de4-11b7fb7682d8\") " pod="tigera-operator/tigera-operator-7dcd859c48-d4wjh" Dec 16 12:45:46.860408 containerd[2018]: time="2025-12-16T12:45:46.860273324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9tfks,Uid:48864e7d-8371-4c37-b856-3e7e89bd35b5,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:46.896009 containerd[2018]: time="2025-12-16T12:45:46.895960572Z" level=info msg="connecting to shim fa5b7ea3daacea4587dc1f2b90192504c8d4f63fb9c08932579b0f0899e5cf86" address="unix:///run/containerd/s/a84363c12058deb550d2213b34e5cecca433ebd5155c0513fa446fd478d7a72c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:46.921134 systemd[1]: Started cri-containerd-fa5b7ea3daacea4587dc1f2b90192504c8d4f63fb9c08932579b0f0899e5cf86.scope - libcontainer container fa5b7ea3daacea4587dc1f2b90192504c8d4f63fb9c08932579b0f0899e5cf86. 
Dec 16 12:45:46.929000 audit: BPF prog-id=156 op=LOAD Dec 16 12:45:46.934191 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:45:46.934249 kernel: audit: type=1334 audit(1765889146.929:449): prog-id=156 op=LOAD Dec 16 12:45:46.938000 audit: BPF prog-id=157 op=LOAD Dec 16 12:45:46.945003 kernel: audit: type=1334 audit(1765889146.938:450): prog-id=157 op=LOAD Dec 16 12:45:46.947198 containerd[2018]: time="2025-12-16T12:45:46.947051950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-d4wjh,Uid:3d7ae9eb-6412-4816-9de4-11b7fb7682d8,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:45:46.938000 audit[3723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:46.964900 kernel: audit: type=1300 audit(1765889146.938:450): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:46.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:46.982801 kernel: audit: type=1327 audit(1765889146.938:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:46.983628 containerd[2018]: 
time="2025-12-16T12:45:46.983514771Z" level=info msg="connecting to shim 932e145eb4042b3393a7b6e2c6cc354fd37342351ca70c3335f87471d4e2f838" address="unix:///run/containerd/s/89e75abb6549fe81807a04857d54fde711cb1dff5386af19ab916b698d7a6d28" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:46.938000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:45:46.990448 kernel: audit: type=1334 audit(1765889146.938:451): prog-id=157 op=UNLOAD Dec 16 12:45:46.938000 audit[3723]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.007368 kernel: audit: type=1300 audit(1765889146.938:451): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:46.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:47.026204 kernel: audit: type=1327 audit(1765889146.938:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:46.938000 audit: BPF prog-id=158 op=LOAD Dec 16 12:45:47.031282 kernel: audit: type=1334 audit(1765889146.938:452): prog-id=158 op=LOAD Dec 16 12:45:46.938000 audit[3723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 
a2=98 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:46.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:47.050231 systemd[1]: Started cri-containerd-932e145eb4042b3393a7b6e2c6cc354fd37342351ca70c3335f87471d4e2f838.scope - libcontainer container 932e145eb4042b3393a7b6e2c6cc354fd37342351ca70c3335f87471d4e2f838. Dec 16 12:45:47.058585 containerd[2018]: time="2025-12-16T12:45:47.058501290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9tfks,Uid:48864e7d-8371-4c37-b856-3e7e89bd35b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa5b7ea3daacea4587dc1f2b90192504c8d4f63fb9c08932579b0f0899e5cf86\"" Dec 16 12:45:47.068003 kernel: audit: type=1300 audit(1765889146.938:452): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.068407 kernel: audit: type=1327 audit(1765889146.938:452): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:47.069041 containerd[2018]: time="2025-12-16T12:45:47.068793294Z" level=info msg="CreateContainer within sandbox \"fa5b7ea3daacea4587dc1f2b90192504c8d4f63fb9c08932579b0f0899e5cf86\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:45:46.938000 audit: BPF prog-id=159 op=LOAD Dec 16 12:45:46.938000 audit[3723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:46.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:46.938000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:45:46.938000 audit[3723]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:46.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:46.938000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:45:46.938000 audit[3723]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:46.938000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:46.938000 audit: BPF prog-id=160 op=LOAD Dec 16 12:45:46.938000 audit[3723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3712 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:46.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661356237656133646161636561343538376463316632623930313932 Dec 16 12:45:47.076000 audit: BPF prog-id=161 op=LOAD Dec 16 12:45:47.077000 audit: BPF prog-id=162 op=LOAD Dec 16 12:45:47.077000 audit[3762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=3750 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933326531343565623430343262333339336137623665326336636333 Dec 16 12:45:47.077000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:45:47.077000 audit[3762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3750 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933326531343565623430343262333339336137623665326336636333 Dec 16 12:45:47.078000 audit: BPF prog-id=163 op=LOAD Dec 16 12:45:47.078000 audit[3762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3750 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933326531343565623430343262333339336137623665326336636333 Dec 16 12:45:47.078000 audit: BPF prog-id=164 op=LOAD Dec 16 12:45:47.078000 audit[3762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3750 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933326531343565623430343262333339336137623665326336636333 Dec 16 12:45:47.078000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:45:47.078000 audit[3762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3750 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933326531343565623430343262333339336137623665326336636333 Dec 16 12:45:47.078000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:45:47.078000 audit[3762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3750 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933326531343565623430343262333339336137623665326336636333 Dec 16 12:45:47.078000 audit: BPF prog-id=165 op=LOAD Dec 16 12:45:47.078000 audit[3762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3750 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933326531343565623430343262333339336137623665326336636333 Dec 16 12:45:47.090706 containerd[2018]: time="2025-12-16T12:45:47.089942778Z" level=info msg="Container a4b3f13f10d5f89ba8282ff253399ff0a467a5e4de6ea1a1a315d91d4e322861: CDI devices from CRI Config.CDIDevices: []" Dec 16 
12:45:47.104504 containerd[2018]: time="2025-12-16T12:45:47.104461831Z" level=info msg="CreateContainer within sandbox \"fa5b7ea3daacea4587dc1f2b90192504c8d4f63fb9c08932579b0f0899e5cf86\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4b3f13f10d5f89ba8282ff253399ff0a467a5e4de6ea1a1a315d91d4e322861\"" Dec 16 12:45:47.106702 containerd[2018]: time="2025-12-16T12:45:47.106652598Z" level=info msg="StartContainer for \"a4b3f13f10d5f89ba8282ff253399ff0a467a5e4de6ea1a1a315d91d4e322861\"" Dec 16 12:45:47.107842 containerd[2018]: time="2025-12-16T12:45:47.107814483Z" level=info msg="connecting to shim a4b3f13f10d5f89ba8282ff253399ff0a467a5e4de6ea1a1a315d91d4e322861" address="unix:///run/containerd/s/a84363c12058deb550d2213b34e5cecca433ebd5155c0513fa446fd478d7a72c" protocol=ttrpc version=3 Dec 16 12:45:47.108756 containerd[2018]: time="2025-12-16T12:45:47.105602260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-d4wjh,Uid:3d7ae9eb-6412-4816-9de4-11b7fb7682d8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"932e145eb4042b3393a7b6e2c6cc354fd37342351ca70c3335f87471d4e2f838\"" Dec 16 12:45:47.114209 containerd[2018]: time="2025-12-16T12:45:47.114022508Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:45:47.130126 systemd[1]: Started cri-containerd-a4b3f13f10d5f89ba8282ff253399ff0a467a5e4de6ea1a1a315d91d4e322861.scope - libcontainer container a4b3f13f10d5f89ba8282ff253399ff0a467a5e4de6ea1a1a315d91d4e322861. 
Dec 16 12:45:47.161000 audit: BPF prog-id=166 op=LOAD Dec 16 12:45:47.161000 audit[3797]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3712 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623366313366313064356638396261383238326666323533333939 Dec 16 12:45:47.161000 audit: BPF prog-id=167 op=LOAD Dec 16 12:45:47.161000 audit[3797]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3712 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623366313366313064356638396261383238326666323533333939 Dec 16 12:45:47.161000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:45:47.161000 audit[3797]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3712 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.161000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623366313366313064356638396261383238326666323533333939 Dec 16 12:45:47.161000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:45:47.161000 audit[3797]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3712 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623366313366313064356638396261383238326666323533333939 Dec 16 12:45:47.161000 audit: BPF prog-id=168 op=LOAD Dec 16 12:45:47.161000 audit[3797]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3712 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134623366313366313064356638396261383238326666323533333939 Dec 16 12:45:47.183187 containerd[2018]: time="2025-12-16T12:45:47.183055522Z" level=info msg="StartContainer for \"a4b3f13f10d5f89ba8282ff253399ff0a467a5e4de6ea1a1a315d91d4e322861\" returns successfully" Dec 16 12:45:47.280000 audit[3859]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3859 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Dec 16 12:45:47.280000 audit[3859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc0c22120 a2=0 a3=1 items=0 ppid=3809 pid=3859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:45:47.281000 audit[3862]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3862 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.281000 audit[3862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb6c9760 a2=0 a3=1 items=0 ppid=3809 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:45:47.283000 audit[3863]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.283000 audit[3863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdd0c3380 a2=0 a3=1 items=0 ppid=3809 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:45:47.285000 audit[3865]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=3865 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 12:45:47.285000 audit[3865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd08b0ba0 a2=0 a3=1 items=0 ppid=3809 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.285000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:45:47.288000 audit[3866]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.288000 audit[3866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd01ce510 a2=0 a3=1 items=0 ppid=3809 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.288000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:45:47.290000 audit[3867]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.290000 audit[3867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffded7a0b0 a2=0 a3=1 items=0 ppid=3809 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.290000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:45:47.386000 audit[3868]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3868 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 12:45:47.386000 audit[3868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffce9ebf10 a2=0 a3=1 items=0 ppid=3809 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.386000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:45:47.389000 audit[3870]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.389000 audit[3870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffa63ebc0 a2=0 a3=1 items=0 ppid=3809 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.389000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:45:47.392000 audit[3873]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.392000 audit[3873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe14f5e20 a2=0 a3=1 items=0 ppid=3809 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.392000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:45:47.393000 audit[3874]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.393000 audit[3874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea346e60 a2=0 a3=1 items=0 ppid=3809 pid=3874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:45:47.396000 audit[3876]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.396000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc6fadd90 a2=0 a3=1 items=0 ppid=3809 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.396000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:45:47.397000 audit[3877]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.397000 audit[3877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=fffff84cd890 a2=0 a3=1 items=0 ppid=3809 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:45:47.400000 audit[3879]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.400000 audit[3879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffddde6fd0 a2=0 a3=1 items=0 ppid=3809 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.400000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:45:47.404000 audit[3882]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.404000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff5ad41b0 a2=0 a3=1 items=0 ppid=3809 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.404000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:45:47.405000 audit[3883]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.405000 audit[3883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4829170 a2=0 a3=1 items=0 ppid=3809 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:45:47.408000 audit[3885]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.408000 audit[3885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe0fb0940 a2=0 a3=1 items=0 ppid=3809 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:45:47.409000 audit[3886]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.409000 audit[3886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff68e8360 a2=0 a3=1 
items=0 ppid=3809 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:45:47.413000 audit[3888]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.413000 audit[3888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc510d5d0 a2=0 a3=1 items=0 ppid=3809 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:45:47.416000 audit[3891]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.416000 audit[3891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffe71e0c0 a2=0 a3=1 items=0 ppid=3809 pid=3891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.416000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:45:47.419000 audit[3894]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.419000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff7280cf0 a2=0 a3=1 items=0 ppid=3809 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:45:47.420000 audit[3895]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.420000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff1cd9f60 a2=0 a3=1 items=0 ppid=3809 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:45:47.423000 audit[3897]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.423000 audit[3897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=ffffcf5dcc70 a2=0 a3=1 items=0 ppid=3809 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:47.426000 audit[3900]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.426000 audit[3900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff0c90d20 a2=0 a3=1 items=0 ppid=3809 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:47.427000 audit[3901]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.427000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5604b70 a2=0 a3=1 items=0 ppid=3809 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:45:47.429000 audit[3903]: NETFILTER_CFG 
table=nat:81 family=2 entries=1 op=nft_register_rule pid=3903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:47.429000 audit[3903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd35db510 a2=0 a3=1 items=0 ppid=3809 pid=3903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:45:47.482000 audit[3909]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:47.482000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff3a18640 a2=0 a3=1 items=0 ppid=3809 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:47.490000 audit[3909]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:47.490000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff3a18640 a2=0 a3=1 items=0 ppid=3809 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.490000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:47.492000 audit[3914]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.492000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff7970df0 a2=0 a3=1 items=0 ppid=3809 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.492000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:45:47.494000 audit[3916]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.494000 audit[3916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd4c472a0 a2=0 a3=1 items=0 ppid=3809 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:45:47.498000 audit[3919]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.498000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffed174be0 a2=0 a3=1 items=0 ppid=3809 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:45:47.499000 audit[3920]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.499000 audit[3920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0bf7280 a2=0 a3=1 items=0 ppid=3809 pid=3920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.499000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:45:47.502000 audit[3922]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.502000 audit[3922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffccec8a00 a2=0 a3=1 items=0 ppid=3809 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:45:47.503000 audit[3923]: NETFILTER_CFG table=filter:89 family=10 entries=1 
op=nft_register_chain pid=3923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.503000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffead8dac0 a2=0 a3=1 items=0 ppid=3809 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.503000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:45:47.505000 audit[3925]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.505000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff83047b0 a2=0 a3=1 items=0 ppid=3809 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:45:47.509000 audit[3928]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.509000 audit[3928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff9b1c320 a2=0 a3=1 items=0 ppid=3809 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.509000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:45:47.510000 audit[3929]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.510000 audit[3929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe81e5250 a2=0 a3=1 items=0 ppid=3809 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:45:47.513000 audit[3931]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.513000 audit[3931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc8dc9680 a2=0 a3=1 items=0 ppid=3809 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:45:47.514000 audit[3932]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.514000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff338ccc0 a2=0 
a3=1 items=0 ppid=3809 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.514000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:45:47.517000 audit[3934]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.517000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd9f652d0 a2=0 a3=1 items=0 ppid=3809 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.517000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:45:47.520000 audit[3937]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.520000 audit[3937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef5ef950 a2=0 a3=1 items=0 ppid=3809 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.520000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:45:47.523000 audit[3940]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.523000 audit[3940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcae57190 a2=0 a3=1 items=0 ppid=3809 pid=3940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.523000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:45:47.524000 audit[3941]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.524000 audit[3941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1513b50 a2=0 a3=1 items=0 ppid=3809 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.524000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:45:47.526000 audit[3943]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.526000 audit[3943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 
a0=3 a1=fffff1e76410 a2=0 a3=1 items=0 ppid=3809 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:47.529000 audit[3946]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.529000 audit[3946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc95331d0 a2=0 a3=1 items=0 ppid=3809 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:45:47.530000 audit[3947]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.530000 audit[3947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeea0e9a0 a2=0 a3=1 items=0 ppid=3809 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:45:47.533000 audit[3949]: 
NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.533000 audit[3949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd66e4640 a2=0 a3=1 items=0 ppid=3809 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.533000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:45:47.534000 audit[3950]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.534000 audit[3950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2f73d30 a2=0 a3=1 items=0 ppid=3809 pid=3950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:45:47.536000 audit[3952]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.536000 audit[3952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffda753d50 a2=0 a3=1 items=0 ppid=3809 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.536000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:45:47.539000 audit[3955]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:47.539000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc8a1ae90 a2=0 a3=1 items=0 ppid=3809 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:45:47.545000 audit[3957]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:45:47.545000 audit[3957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff81e64d0 a2=0 a3=1 items=0 ppid=3809 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.545000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:47.545000 audit[3957]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:45:47.545000 audit[3957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff81e64d0 a2=0 a3=1 items=0 ppid=3809 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:47.545000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:48.807805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159742183.mount: Deactivated successfully. Dec 16 12:45:49.427268 containerd[2018]: time="2025-12-16T12:45:49.426708002Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:49.429810 containerd[2018]: time="2025-12-16T12:45:49.429630912Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:45:49.432850 containerd[2018]: time="2025-12-16T12:45:49.432808215Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:49.437340 containerd[2018]: time="2025-12-16T12:45:49.437272199Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:49.437703 containerd[2018]: time="2025-12-16T12:45:49.437675908Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.323562518s" Dec 16 12:45:49.437747 containerd[2018]: time="2025-12-16T12:45:49.437706693Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:45:49.444946 containerd[2018]: 
time="2025-12-16T12:45:49.444898630Z" level=info msg="CreateContainer within sandbox \"932e145eb4042b3393a7b6e2c6cc354fd37342351ca70c3335f87471d4e2f838\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:45:49.464270 containerd[2018]: time="2025-12-16T12:45:49.464219582Z" level=info msg="Container 560181e9fb925997001979db8c4526a545d2204b6beaf663635e6b7748c6fd7e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:49.465683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046258909.mount: Deactivated successfully. Dec 16 12:45:49.468478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292709997.mount: Deactivated successfully. Dec 16 12:45:49.477638 containerd[2018]: time="2025-12-16T12:45:49.477583006Z" level=info msg="CreateContainer within sandbox \"932e145eb4042b3393a7b6e2c6cc354fd37342351ca70c3335f87471d4e2f838\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"560181e9fb925997001979db8c4526a545d2204b6beaf663635e6b7748c6fd7e\"" Dec 16 12:45:49.478616 containerd[2018]: time="2025-12-16T12:45:49.478501219Z" level=info msg="StartContainer for \"560181e9fb925997001979db8c4526a545d2204b6beaf663635e6b7748c6fd7e\"" Dec 16 12:45:49.479995 containerd[2018]: time="2025-12-16T12:45:49.479962795Z" level=info msg="connecting to shim 560181e9fb925997001979db8c4526a545d2204b6beaf663635e6b7748c6fd7e" address="unix:///run/containerd/s/89e75abb6549fe81807a04857d54fde711cb1dff5386af19ab916b698d7a6d28" protocol=ttrpc version=3 Dec 16 12:45:49.500138 systemd[1]: Started cri-containerd-560181e9fb925997001979db8c4526a545d2204b6beaf663635e6b7748c6fd7e.scope - libcontainer container 560181e9fb925997001979db8c4526a545d2204b6beaf663635e6b7748c6fd7e. 
Dec 16 12:45:49.509000 audit: BPF prog-id=169 op=LOAD Dec 16 12:45:49.509000 audit: BPF prog-id=170 op=LOAD Dec 16 12:45:49.509000 audit[3966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3750 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536303138316539666239323539393730303139373964623863343532 Dec 16 12:45:49.509000 audit: BPF prog-id=170 op=UNLOAD Dec 16 12:45:49.509000 audit[3966]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3750 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536303138316539666239323539393730303139373964623863343532 Dec 16 12:45:49.509000 audit: BPF prog-id=171 op=LOAD Dec 16 12:45:49.509000 audit[3966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3750 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.509000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536303138316539666239323539393730303139373964623863343532 Dec 16 12:45:49.509000 audit: BPF prog-id=172 op=LOAD Dec 16 12:45:49.509000 audit[3966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3750 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536303138316539666239323539393730303139373964623863343532 Dec 16 12:45:49.509000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:45:49.509000 audit[3966]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3750 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536303138316539666239323539393730303139373964623863343532 Dec 16 12:45:49.509000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:45:49.509000 audit[3966]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3750 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:45:49.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536303138316539666239323539393730303139373964623863343532 Dec 16 12:45:49.509000 audit: BPF prog-id=173 op=LOAD Dec 16 12:45:49.509000 audit[3966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3750 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536303138316539666239323539393730303139373964623863343532 Dec 16 12:45:49.532117 containerd[2018]: time="2025-12-16T12:45:49.531748828Z" level=info msg="StartContainer for \"560181e9fb925997001979db8c4526a545d2204b6beaf663635e6b7748c6fd7e\" returns successfully" Dec 16 12:45:49.624122 kubelet[3651]: I1216 12:45:49.624061 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9tfks" podStartSLOduration=3.624040818 podStartE2EDuration="3.624040818s" podCreationTimestamp="2025-12-16 12:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:47.620450087 +0000 UTC m=+6.154153653" watchObservedRunningTime="2025-12-16 12:45:49.624040818 +0000 UTC m=+8.157744384" Dec 16 12:45:50.141355 kubelet[3651]: I1216 12:45:50.140819 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-d4wjh" podStartSLOduration=1.8137653569999999 
podStartE2EDuration="4.140804147s" podCreationTimestamp="2025-12-16 12:45:46 +0000 UTC" firstStartedPulling="2025-12-16 12:45:47.111441656 +0000 UTC m=+5.645145222" lastFinishedPulling="2025-12-16 12:45:49.438480446 +0000 UTC m=+7.972184012" observedRunningTime="2025-12-16 12:45:49.625503817 +0000 UTC m=+8.159207383" watchObservedRunningTime="2025-12-16 12:45:50.140804147 +0000 UTC m=+8.674507713" Dec 16 12:45:54.742222 sudo[2515]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:54.742000 audit[2515]: USER_END pid=2515 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:54.746718 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:45:54.746832 kernel: audit: type=1106 audit(1765889154.742:529): pid=2515 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:54.742000 audit[2515]: CRED_DISP pid=2515 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:54.783532 kernel: audit: type=1104 audit(1765889154.742:530): pid=2515 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:54.822324 sshd[2514]: Connection closed by 10.200.16.10 port 35376 Dec 16 12:45:54.822000 audit[2493]: USER_END pid=2493 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:54.823167 sshd-session[2493]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:54.822000 audit[2493]: CRED_DISP pid=2493 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:54.847190 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:35376.service: Deactivated successfully. Dec 16 12:45:54.853121 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:45:54.853633 systemd[1]: session-9.scope: Consumed 4.296s CPU time, 216.6M memory peak. 
Dec 16 12:45:54.862721 kernel: audit: type=1106 audit(1765889154.822:531): pid=2493 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:54.862851 kernel: audit: type=1104 audit(1765889154.822:532): pid=2493 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:54.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.38:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:54.880412 kernel: audit: type=1131 audit(1765889154.848:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.38:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:54.881804 systemd-logind[1983]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:45:54.883727 systemd-logind[1983]: Removed session 9. 
Dec 16 12:45:56.243000 audit[4046]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:56.243000 audit[4046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd19c9630 a2=0 a3=1 items=0 ppid=3809 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:56.277180 kernel: audit: type=1325 audit(1765889156.243:534): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:56.277324 kernel: audit: type=1300 audit(1765889156.243:534): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd19c9630 a2=0 a3=1 items=0 ppid=3809 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:56.243000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:56.288060 kernel: audit: type=1327 audit(1765889156.243:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:56.291000 audit[4046]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:56.291000 audit[4046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd19c9630 a2=0 a3=1 items=0 ppid=3809 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:56.331725 
kernel: audit: type=1325 audit(1765889156.291:535): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:56.331865 kernel: audit: type=1300 audit(1765889156.291:535): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd19c9630 a2=0 a3=1 items=0 ppid=3809 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:56.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:56.352000 audit[4048]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:56.352000 audit[4048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc3251970 a2=0 a3=1 items=0 ppid=3809 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:56.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:56.359000 audit[4048]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:56.359000 audit[4048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc3251970 a2=0 a3=1 items=0 ppid=3809 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:56.359000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:58.649000 audit[4050]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:58.649000 audit[4050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc0e7a7f0 a2=0 a3=1 items=0 ppid=3809 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:58.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:58.660000 audit[4050]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:58.660000 audit[4050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc0e7a7f0 a2=0 a3=1 items=0 ppid=3809 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:58.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:58.684000 audit[4052]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:58.684000 audit[4052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffff5fd0e0 a2=0 a3=1 items=0 ppid=3809 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:45:58.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:58.691000 audit[4052]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:58.691000 audit[4052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff5fd0e0 a2=0 a3=1 items=0 ppid=3809 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:58.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:59.713000 audit[4055]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:59.713000 audit[4055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe84b56d0 a2=0 a3=1 items=0 ppid=3809 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:45:59.717000 audit[4055]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:45:59.717000 audit[4055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe84b56d0 a2=0 a3=1 items=0 ppid=3809 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:01.088000 audit[4057]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:01.093083 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 16 12:46:01.093215 kernel: audit: type=1325 audit(1765889161.088:544): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:01.088000 audit[4057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc3e21f60 a2=0 a3=1 items=0 ppid=3809 pid=4057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:01.139175 kernel: audit: type=1300 audit(1765889161.088:544): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc3e21f60 a2=0 a3=1 items=0 ppid=3809 pid=4057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.139312 kernel: audit: type=1327 audit(1765889161.088:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:01.105000 audit[4057]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:01.154535 kernel: audit: type=1325 audit(1765889161.105:545): table=nat:119 family=2 
entries=12 op=nft_register_rule pid=4057 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:01.105000 audit[4057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc3e21f60 a2=0 a3=1 items=0 ppid=3809 pid=4057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.179447 kernel: audit: type=1300 audit(1765889161.105:545): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc3e21f60 a2=0 a3=1 items=0 ppid=3809 pid=4057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.105000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:01.185134 systemd[1]: Created slice kubepods-besteffort-pod8fa4a94c_1c41_42b8_aa2a_453425c20b7c.slice - libcontainer container kubepods-besteffort-pod8fa4a94c_1c41_42b8_aa2a_453425c20b7c.slice. 
Dec 16 12:46:01.187838 kubelet[3651]: I1216 12:46:01.187524 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fa4a94c-1c41-42b8-aa2a-453425c20b7c-tigera-ca-bundle\") pod \"calico-typha-747db5bd9d-dxjv5\" (UID: \"8fa4a94c-1c41-42b8-aa2a-453425c20b7c\") " pod="calico-system/calico-typha-747db5bd9d-dxjv5" Dec 16 12:46:01.187838 kubelet[3651]: I1216 12:46:01.187571 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8fa4a94c-1c41-42b8-aa2a-453425c20b7c-typha-certs\") pod \"calico-typha-747db5bd9d-dxjv5\" (UID: \"8fa4a94c-1c41-42b8-aa2a-453425c20b7c\") " pod="calico-system/calico-typha-747db5bd9d-dxjv5" Dec 16 12:46:01.187838 kubelet[3651]: I1216 12:46:01.187613 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dcxz\" (UniqueName: \"kubernetes.io/projected/8fa4a94c-1c41-42b8-aa2a-453425c20b7c-kube-api-access-4dcxz\") pod \"calico-typha-747db5bd9d-dxjv5\" (UID: \"8fa4a94c-1c41-42b8-aa2a-453425c20b7c\") " pod="calico-system/calico-typha-747db5bd9d-dxjv5" Dec 16 12:46:01.192748 kernel: audit: type=1327 audit(1765889161.105:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:01.325409 systemd[1]: Created slice kubepods-besteffort-pod830aa81d_0ba1_458f_a255_ecdab0e85578.slice - libcontainer container kubepods-besteffort-pod830aa81d_0ba1_458f_a255_ecdab0e85578.slice. 
Dec 16 12:46:01.489844 kubelet[3651]: I1216 12:46:01.488966 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/830aa81d-0ba1-458f-a255-ecdab0e85578-node-certs\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490199 kubelet[3651]: I1216 12:46:01.489789 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-policysync\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490199 kubelet[3651]: I1216 12:46:01.490084 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-var-run-calico\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490199 kubelet[3651]: I1216 12:46:01.490148 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-cni-bin-dir\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490199 kubelet[3651]: I1216 12:46:01.490169 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-flexvol-driver-host\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490460 kubelet[3651]: I1216 12:46:01.490189 3651 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-cni-net-dir\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490460 kubelet[3651]: I1216 12:46:01.490324 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-xtables-lock\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490460 kubelet[3651]: I1216 12:46:01.490351 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-lib-modules\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490460 kubelet[3651]: I1216 12:46:01.490364 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830aa81d-0ba1-458f-a255-ecdab0e85578-tigera-ca-bundle\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490460 kubelet[3651]: I1216 12:46:01.490375 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-cni-log-dir\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490579 kubelet[3651]: I1216 12:46:01.490385 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/830aa81d-0ba1-458f-a255-ecdab0e85578-var-lib-calico\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.490579 kubelet[3651]: I1216 12:46:01.490398 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9bp\" (UniqueName: \"kubernetes.io/projected/830aa81d-0ba1-458f-a255-ecdab0e85578-kube-api-access-mw9bp\") pod \"calico-node-tvmc8\" (UID: \"830aa81d-0ba1-458f-a255-ecdab0e85578\") " pod="calico-system/calico-node-tvmc8" Dec 16 12:46:01.500857 kubelet[3651]: E1216 12:46:01.500117 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:01.502596 containerd[2018]: time="2025-12-16T12:46:01.502302462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747db5bd9d-dxjv5,Uid:8fa4a94c-1c41-42b8-aa2a-453425c20b7c,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:01.559829 containerd[2018]: time="2025-12-16T12:46:01.559304512Z" level=info msg="connecting to shim d264bef1f0270fe2d9f783021d0fbbdc671f90bfc4d8ee482ee34baced1efe43" address="unix:///run/containerd/s/7209234d585f55b91b49a766b645375cdc556ff9bf1ffa761bd958ec57a898b9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:01.601582 systemd[1]: Started cri-containerd-d264bef1f0270fe2d9f783021d0fbbdc671f90bfc4d8ee482ee34baced1efe43.scope - libcontainer container d264bef1f0270fe2d9f783021d0fbbdc671f90bfc4d8ee482ee34baced1efe43. 
Dec 16 12:46:01.610296 kubelet[3651]: E1216 12:46:01.610260 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.610612 kubelet[3651]: W1216 12:46:01.610422 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.610612 kubelet[3651]: E1216 12:46:01.610462 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.624053 kubelet[3651]: E1216 12:46:01.624019 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.624334 kubelet[3651]: W1216 12:46:01.624260 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.624334 kubelet[3651]: E1216 12:46:01.624293 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.631718 containerd[2018]: time="2025-12-16T12:46:01.631671188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tvmc8,Uid:830aa81d-0ba1-458f-a255-ecdab0e85578,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:01.677751 containerd[2018]: time="2025-12-16T12:46:01.677340376Z" level=info msg="connecting to shim 8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216" address="unix:///run/containerd/s/0ce5851f4fa2941d87c8688522bd9a92b17dbcfba7d6c2d331088b1d8b2a3604" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:01.692676 kubelet[3651]: E1216 12:46:01.692639 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.692676 kubelet[3651]: W1216 12:46:01.692666 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.692862 kubelet[3651]: E1216 12:46:01.692686 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.692862 kubelet[3651]: I1216 12:46:01.692723 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp5p2\" (UniqueName: \"kubernetes.io/projected/180cd658-ddf7-4444-81e2-acfbf19611d5-kube-api-access-tp5p2\") pod \"csi-node-driver-mj8nq\" (UID: \"180cd658-ddf7-4444-81e2-acfbf19611d5\") " pod="calico-system/csi-node-driver-mj8nq" Dec 16 12:46:01.694032 kubelet[3651]: E1216 12:46:01.692915 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.694032 kubelet[3651]: W1216 12:46:01.692925 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.694032 kubelet[3651]: E1216 12:46:01.692936 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.694032 kubelet[3651]: I1216 12:46:01.693932 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/180cd658-ddf7-4444-81e2-acfbf19611d5-varrun\") pod \"csi-node-driver-mj8nq\" (UID: \"180cd658-ddf7-4444-81e2-acfbf19611d5\") " pod="calico-system/csi-node-driver-mj8nq" Dec 16 12:46:01.694391 kubelet[3651]: E1216 12:46:01.694238 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.694391 kubelet[3651]: W1216 12:46:01.694252 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.694391 kubelet[3651]: E1216 12:46:01.694264 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.694391 kubelet[3651]: I1216 12:46:01.694287 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/180cd658-ddf7-4444-81e2-acfbf19611d5-registration-dir\") pod \"csi-node-driver-mj8nq\" (UID: \"180cd658-ddf7-4444-81e2-acfbf19611d5\") " pod="calico-system/csi-node-driver-mj8nq" Dec 16 12:46:01.695083 kubelet[3651]: E1216 12:46:01.694929 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.695083 kubelet[3651]: W1216 12:46:01.694950 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.695083 kubelet[3651]: E1216 12:46:01.694962 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.695344 kubelet[3651]: E1216 12:46:01.695246 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.695344 kubelet[3651]: W1216 12:46:01.695259 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.695344 kubelet[3651]: E1216 12:46:01.695268 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.695552 kubelet[3651]: E1216 12:46:01.695541 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.697076 kubelet[3651]: W1216 12:46:01.696916 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.697076 kubelet[3651]: E1216 12:46:01.696945 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.697325 kubelet[3651]: E1216 12:46:01.697226 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.697325 kubelet[3651]: W1216 12:46:01.697238 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.697325 kubelet[3651]: E1216 12:46:01.697249 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.697651 kubelet[3651]: E1216 12:46:01.697580 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.697651 kubelet[3651]: W1216 12:46:01.697594 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.697651 kubelet[3651]: E1216 12:46:01.697604 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.697734 kubelet[3651]: I1216 12:46:01.697657 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/180cd658-ddf7-4444-81e2-acfbf19611d5-socket-dir\") pod \"csi-node-driver-mj8nq\" (UID: \"180cd658-ddf7-4444-81e2-acfbf19611d5\") " pod="calico-system/csi-node-driver-mj8nq" Dec 16 12:46:01.699976 kubelet[3651]: E1216 12:46:01.699127 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.699976 kubelet[3651]: W1216 12:46:01.699149 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.699976 kubelet[3651]: E1216 12:46:01.699167 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.699976 kubelet[3651]: E1216 12:46:01.699317 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.699976 kubelet[3651]: W1216 12:46:01.699324 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.699976 kubelet[3651]: E1216 12:46:01.699334 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.699976 kubelet[3651]: E1216 12:46:01.699440 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.699976 kubelet[3651]: W1216 12:46:01.699445 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.699976 kubelet[3651]: E1216 12:46:01.699451 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.700183 kubelet[3651]: I1216 12:46:01.699538 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/180cd658-ddf7-4444-81e2-acfbf19611d5-kubelet-dir\") pod \"csi-node-driver-mj8nq\" (UID: \"180cd658-ddf7-4444-81e2-acfbf19611d5\") " pod="calico-system/csi-node-driver-mj8nq" Dec 16 12:46:01.700183 kubelet[3651]: E1216 12:46:01.699632 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.700183 kubelet[3651]: W1216 12:46:01.699637 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.700183 kubelet[3651]: E1216 12:46:01.699645 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.700183 kubelet[3651]: E1216 12:46:01.699751 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.700183 kubelet[3651]: W1216 12:46:01.699757 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.700183 kubelet[3651]: E1216 12:46:01.699764 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.700183 kubelet[3651]: E1216 12:46:01.699878 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.700183 kubelet[3651]: W1216 12:46:01.699884 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.700309 kubelet[3651]: E1216 12:46:01.699890 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.700420 kubelet[3651]: E1216 12:46:01.700376 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.700420 kubelet[3651]: W1216 12:46:01.700389 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.700420 kubelet[3651]: E1216 12:46:01.700400 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.716075 systemd[1]: Started cri-containerd-8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216.scope - libcontainer container 8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216. 
Dec 16 12:46:01.727000 audit: BPF prog-id=174 op=LOAD Dec 16 12:46:01.728000 audit: BPF prog-id=175 op=LOAD Dec 16 12:46:01.740289 kernel: audit: type=1334 audit(1765889161.727:546): prog-id=174 op=LOAD Dec 16 12:46:01.740422 kernel: audit: type=1334 audit(1765889161.728:547): prog-id=175 op=LOAD Dec 16 12:46:01.728000 audit[4123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4112 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.761794 kernel: audit: type=1300 audit(1765889161.728:547): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4112 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653734386332363637623130383639663831356538383236633435 Dec 16 12:46:01.781375 kernel: audit: type=1327 audit(1765889161.728:547): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653734386332363637623130383639663831356538383236633435 Dec 16 12:46:01.732000 audit: BPF prog-id=175 op=UNLOAD Dec 16 12:46:01.732000 audit[4123]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:46:01.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653734386332363637623130383639663831356538383236633435 Dec 16 12:46:01.732000 audit: BPF prog-id=176 op=LOAD Dec 16 12:46:01.732000 audit[4123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4112 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653734386332363637623130383639663831356538383236633435 Dec 16 12:46:01.733000 audit: BPF prog-id=177 op=LOAD Dec 16 12:46:01.733000 audit[4123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4112 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653734386332363637623130383639663831356538383236633435 Dec 16 12:46:01.733000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:46:01.733000 audit[4123]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653734386332363637623130383639663831356538383236633435 Dec 16 12:46:01.733000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:46:01.733000 audit[4123]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653734386332363637623130383639663831356538383236633435 Dec 16 12:46:01.733000 audit: BPF prog-id=178 op=LOAD Dec 16 12:46:01.733000 audit[4123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4112 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653734386332363637623130383639663831356538383236633435 Dec 16 12:46:01.743000 audit: BPF prog-id=179 op=LOAD Dec 16 12:46:01.743000 audit: BPF prog-id=180 op=LOAD Dec 16 12:46:01.743000 audit[4080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363462656631663032373066653264396637383330323164306662 Dec 16 12:46:01.743000 audit: BPF prog-id=180 op=UNLOAD Dec 16 12:46:01.743000 audit[4080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363462656631663032373066653264396637383330323164306662 Dec 16 12:46:01.743000 audit: BPF prog-id=181 op=LOAD Dec 16 12:46:01.743000 audit[4080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363462656631663032373066653264396637383330323164306662 Dec 16 12:46:01.743000 audit: BPF prog-id=182 op=LOAD Dec 16 12:46:01.743000 audit[4080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 
ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363462656631663032373066653264396637383330323164306662 Dec 16 12:46:01.743000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:46:01.743000 audit[4080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363462656631663032373066653264396637383330323164306662 Dec 16 12:46:01.743000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:46:01.743000 audit[4080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363462656631663032373066653264396637383330323164306662 Dec 16 12:46:01.743000 audit: BPF prog-id=183 op=LOAD Dec 16 12:46:01.743000 audit[4080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 
a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:01.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363462656631663032373066653264396637383330323164306662 Dec 16 12:46:01.793893 containerd[2018]: time="2025-12-16T12:46:01.793831407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tvmc8,Uid:830aa81d-0ba1-458f-a255-ecdab0e85578,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216\"" Dec 16 12:46:01.796202 containerd[2018]: time="2025-12-16T12:46:01.796163494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:46:01.801555 kubelet[3651]: E1216 12:46:01.801482 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.801957 kubelet[3651]: W1216 12:46:01.801707 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.801957 kubelet[3651]: E1216 12:46:01.801735 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.802934 kubelet[3651]: E1216 12:46:01.802762 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.802934 kubelet[3651]: W1216 12:46:01.802891 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.802934 kubelet[3651]: E1216 12:46:01.802911 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.803394 kubelet[3651]: E1216 12:46:01.803291 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.803553 kubelet[3651]: W1216 12:46:01.803467 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.803553 kubelet[3651]: E1216 12:46:01.803487 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.804233 kubelet[3651]: E1216 12:46:01.804090 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.804233 kubelet[3651]: W1216 12:46:01.804110 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.804233 kubelet[3651]: E1216 12:46:01.804122 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.804795 kubelet[3651]: E1216 12:46:01.804763 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.805057 kubelet[3651]: W1216 12:46:01.804984 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.805057 kubelet[3651]: E1216 12:46:01.805007 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.805624 kubelet[3651]: E1216 12:46:01.805487 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.805624 kubelet[3651]: W1216 12:46:01.805520 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.805768 kubelet[3651]: E1216 12:46:01.805532 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.806564 kubelet[3651]: E1216 12:46:01.806536 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.806819 kubelet[3651]: W1216 12:46:01.806741 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.807125 kubelet[3651]: E1216 12:46:01.806956 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.807644 kubelet[3651]: E1216 12:46:01.807576 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.807838 kubelet[3651]: W1216 12:46:01.807699 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.807838 kubelet[3651]: E1216 12:46:01.807745 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.808407 kubelet[3651]: E1216 12:46:01.808330 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.808407 kubelet[3651]: W1216 12:46:01.808343 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.808407 kubelet[3651]: E1216 12:46:01.808354 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.809023 kubelet[3651]: E1216 12:46:01.808991 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.809023 kubelet[3651]: W1216 12:46:01.809003 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.809220 kubelet[3651]: E1216 12:46:01.809013 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.809651 kubelet[3651]: E1216 12:46:01.809549 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.809651 kubelet[3651]: W1216 12:46:01.809562 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.809651 kubelet[3651]: E1216 12:46:01.809574 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.810798 kubelet[3651]: E1216 12:46:01.810661 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.811012 kubelet[3651]: W1216 12:46:01.810892 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.811012 kubelet[3651]: E1216 12:46:01.810915 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.811684 kubelet[3651]: E1216 12:46:01.811545 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.811684 kubelet[3651]: W1216 12:46:01.811558 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.811684 kubelet[3651]: E1216 12:46:01.811569 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.812602 kubelet[3651]: E1216 12:46:01.812528 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.812848 kubelet[3651]: W1216 12:46:01.812757 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.812848 kubelet[3651]: E1216 12:46:01.812776 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.813821 kubelet[3651]: E1216 12:46:01.813205 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.813821 kubelet[3651]: W1216 12:46:01.813236 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.813821 kubelet[3651]: E1216 12:46:01.813249 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.814338 kubelet[3651]: E1216 12:46:01.814259 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.814338 kubelet[3651]: W1216 12:46:01.814273 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.814338 kubelet[3651]: E1216 12:46:01.814284 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.814747 kubelet[3651]: E1216 12:46:01.814685 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.814747 kubelet[3651]: W1216 12:46:01.814697 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.814747 kubelet[3651]: E1216 12:46:01.814707 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.815545 kubelet[3651]: E1216 12:46:01.815495 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.815545 kubelet[3651]: W1216 12:46:01.815508 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.815545 kubelet[3651]: E1216 12:46:01.815519 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.815899 kubelet[3651]: E1216 12:46:01.815848 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.815939 kubelet[3651]: W1216 12:46:01.815865 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.815939 kubelet[3651]: E1216 12:46:01.815929 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.816253 kubelet[3651]: E1216 12:46:01.816185 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.816253 kubelet[3651]: W1216 12:46:01.816198 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.816253 kubelet[3651]: E1216 12:46:01.816211 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.816368 kubelet[3651]: E1216 12:46:01.816349 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.816368 kubelet[3651]: W1216 12:46:01.816361 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.816368 kubelet[3651]: E1216 12:46:01.816369 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.816523 kubelet[3651]: E1216 12:46:01.816511 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.816523 kubelet[3651]: W1216 12:46:01.816524 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.816572 kubelet[3651]: E1216 12:46:01.816531 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.816572 kubelet[3651]: E1216 12:46:01.816968 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.816572 kubelet[3651]: W1216 12:46:01.816979 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.816572 kubelet[3651]: E1216 12:46:01.816989 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.816572 kubelet[3651]: E1216 12:46:01.817161 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.816572 kubelet[3651]: W1216 12:46:01.817169 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.816572 kubelet[3651]: E1216 12:46:01.817179 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.818149 kubelet[3651]: E1216 12:46:01.818123 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.818149 kubelet[3651]: W1216 12:46:01.818145 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.818206 kubelet[3651]: E1216 12:46:01.818161 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:01.822882 kubelet[3651]: E1216 12:46:01.822786 3651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:01.823201 kubelet[3651]: W1216 12:46:01.823178 3651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:01.823321 kubelet[3651]: E1216 12:46:01.823307 3651 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:01.851009 containerd[2018]: time="2025-12-16T12:46:01.850958835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747db5bd9d-dxjv5,Uid:8fa4a94c-1c41-42b8-aa2a-453425c20b7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d264bef1f0270fe2d9f783021d0fbbdc671f90bfc4d8ee482ee34baced1efe43\"" Dec 16 12:46:02.184000 audit[4196]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:02.184000 audit[4196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffccbe1da0 a2=0 a3=1 items=0 ppid=3809 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:02.189000 audit[4196]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:02.189000 audit[4196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 
a1=ffffccbe1da0 a2=0 a3=1 items=0 ppid=3809 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.189000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:03.107997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130605406.mount: Deactivated successfully. Dec 16 12:46:03.180925 containerd[2018]: time="2025-12-16T12:46:03.180793877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:03.182791 containerd[2018]: time="2025-12-16T12:46:03.182632756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Dec 16 12:46:03.185297 containerd[2018]: time="2025-12-16T12:46:03.185245883Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:03.188764 containerd[2018]: time="2025-12-16T12:46:03.188700619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:03.189019 containerd[2018]: time="2025-12-16T12:46:03.188990212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 
1.392792613s" Dec 16 12:46:03.189093 containerd[2018]: time="2025-12-16T12:46:03.189024341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:46:03.193346 containerd[2018]: time="2025-12-16T12:46:03.193309686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:46:03.199257 containerd[2018]: time="2025-12-16T12:46:03.199211841Z" level=info msg="CreateContainer within sandbox \"8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:46:03.215214 containerd[2018]: time="2025-12-16T12:46:03.215132497Z" level=info msg="Container 02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:03.230353 containerd[2018]: time="2025-12-16T12:46:03.230214217Z" level=info msg="CreateContainer within sandbox \"8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def\"" Dec 16 12:46:03.231353 containerd[2018]: time="2025-12-16T12:46:03.231316754Z" level=info msg="StartContainer for \"02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def\"" Dec 16 12:46:03.234046 containerd[2018]: time="2025-12-16T12:46:03.234007627Z" level=info msg="connecting to shim 02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def" address="unix:///run/containerd/s/0ce5851f4fa2941d87c8688522bd9a92b17dbcfba7d6c2d331088b1d8b2a3604" protocol=ttrpc version=3 Dec 16 12:46:03.260146 systemd[1]: Started cri-containerd-02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def.scope - libcontainer container 02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def. 
Dec 16 12:46:03.319000 audit: BPF prog-id=184 op=LOAD Dec 16 12:46:03.319000 audit[4205]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4112 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:03.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333736653934313732326665396163623935363963616234646337 Dec 16 12:46:03.320000 audit: BPF prog-id=185 op=LOAD Dec 16 12:46:03.320000 audit[4205]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4112 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:03.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333736653934313732326665396163623935363963616234646337 Dec 16 12:46:03.320000 audit: BPF prog-id=185 op=UNLOAD Dec 16 12:46:03.320000 audit[4205]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:03.320000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333736653934313732326665396163623935363963616234646337 Dec 16 12:46:03.320000 audit: BPF prog-id=184 op=UNLOAD Dec 16 12:46:03.320000 audit[4205]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:03.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333736653934313732326665396163623935363963616234646337 Dec 16 12:46:03.320000 audit: BPF prog-id=186 op=LOAD Dec 16 12:46:03.320000 audit[4205]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4112 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:03.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032333736653934313732326665396163623935363963616234646337 Dec 16 12:46:03.366933 containerd[2018]: time="2025-12-16T12:46:03.366769659Z" level=info msg="StartContainer for \"02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def\" returns successfully" Dec 16 12:46:03.380265 systemd[1]: cri-containerd-02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def.scope: Deactivated successfully. 
Dec 16 12:46:03.383136 containerd[2018]: time="2025-12-16T12:46:03.383093216Z" level=info msg="received container exit event container_id:\"02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def\" id:\"02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def\" pid:4220 exited_at:{seconds:1765889163 nanos:382538423}" Dec 16 12:46:03.383000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:46:03.412918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02376e941722fe9acb9569cab4dc727b7d85db00acff7fb29a0290ebf61e8def-rootfs.mount: Deactivated successfully. Dec 16 12:46:03.566434 kubelet[3651]: E1216 12:46:03.565764 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:05.566802 kubelet[3651]: E1216 12:46:05.566513 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:05.885922 containerd[2018]: time="2025-12-16T12:46:05.885766870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:05.888111 containerd[2018]: time="2025-12-16T12:46:05.888055428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 12:46:05.891004 containerd[2018]: time="2025-12-16T12:46:05.890839864Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Dec 16 12:46:05.894458 containerd[2018]: time="2025-12-16T12:46:05.894393451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:05.895290 containerd[2018]: time="2025-12-16T12:46:05.894963692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.701617605s" Dec 16 12:46:05.895290 containerd[2018]: time="2025-12-16T12:46:05.894997349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:46:05.897898 containerd[2018]: time="2025-12-16T12:46:05.897209208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:46:05.916147 containerd[2018]: time="2025-12-16T12:46:05.916075497Z" level=info msg="CreateContainer within sandbox \"d264bef1f0270fe2d9f783021d0fbbdc671f90bfc4d8ee482ee34baced1efe43\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:46:05.938631 containerd[2018]: time="2025-12-16T12:46:05.938587625Z" level=info msg="Container 8ecb9d8800e855f5052babb4ccc5c99d34254f6f8e3d6185b991d574657bea61: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:05.942219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873843069.mount: Deactivated successfully. 
Dec 16 12:46:05.955123 containerd[2018]: time="2025-12-16T12:46:05.955063546Z" level=info msg="CreateContainer within sandbox \"d264bef1f0270fe2d9f783021d0fbbdc671f90bfc4d8ee482ee34baced1efe43\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8ecb9d8800e855f5052babb4ccc5c99d34254f6f8e3d6185b991d574657bea61\"" Dec 16 12:46:05.956994 containerd[2018]: time="2025-12-16T12:46:05.956844504Z" level=info msg="StartContainer for \"8ecb9d8800e855f5052babb4ccc5c99d34254f6f8e3d6185b991d574657bea61\"" Dec 16 12:46:05.958997 containerd[2018]: time="2025-12-16T12:46:05.958957520Z" level=info msg="connecting to shim 8ecb9d8800e855f5052babb4ccc5c99d34254f6f8e3d6185b991d574657bea61" address="unix:///run/containerd/s/7209234d585f55b91b49a766b645375cdc556ff9bf1ffa761bd958ec57a898b9" protocol=ttrpc version=3 Dec 16 12:46:05.982361 systemd[1]: Started cri-containerd-8ecb9d8800e855f5052babb4ccc5c99d34254f6f8e3d6185b991d574657bea61.scope - libcontainer container 8ecb9d8800e855f5052babb4ccc5c99d34254f6f8e3d6185b991d574657bea61. 
Dec 16 12:46:05.995000 audit: BPF prog-id=187 op=LOAD Dec 16 12:46:05.995000 audit: BPF prog-id=188 op=LOAD Dec 16 12:46:05.995000 audit[4266]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400019c180 a2=98 a3=0 items=0 ppid=4069 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865636239643838303065383535663530353262616262346363633563 Dec 16 12:46:05.995000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:46:05.995000 audit[4266]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865636239643838303065383535663530353262616262346363633563 Dec 16 12:46:05.995000 audit: BPF prog-id=189 op=LOAD Dec 16 12:46:05.995000 audit[4266]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400019c3e8 a2=98 a3=0 items=0 ppid=4069 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:05.995000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865636239643838303065383535663530353262616262346363633563 Dec 16 12:46:05.995000 audit: BPF prog-id=190 op=LOAD Dec 16 12:46:05.995000 audit[4266]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400019c168 a2=98 a3=0 items=0 ppid=4069 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865636239643838303065383535663530353262616262346363633563 Dec 16 12:46:05.995000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:46:05.995000 audit[4266]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865636239643838303065383535663530353262616262346363633563 Dec 16 12:46:05.995000 audit: BPF prog-id=189 op=UNLOAD Dec 16 12:46:05.995000 audit[4266]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:46:05.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865636239643838303065383535663530353262616262346363633563 Dec 16 12:46:05.996000 audit: BPF prog-id=191 op=LOAD Dec 16 12:46:05.996000 audit[4266]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400019c648 a2=98 a3=0 items=0 ppid=4069 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:05.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865636239643838303065383535663530353262616262346363633563 Dec 16 12:46:06.028119 containerd[2018]: time="2025-12-16T12:46:06.028072558Z" level=info msg="StartContainer for \"8ecb9d8800e855f5052babb4ccc5c99d34254f6f8e3d6185b991d574657bea61\" returns successfully" Dec 16 12:46:06.674919 kubelet[3651]: I1216 12:46:06.674583 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-747db5bd9d-dxjv5" podStartSLOduration=1.6312474259999998 podStartE2EDuration="5.674569627s" podCreationTimestamp="2025-12-16 12:46:01 +0000 UTC" firstStartedPulling="2025-12-16 12:46:01.852730068 +0000 UTC m=+20.386433634" lastFinishedPulling="2025-12-16 12:46:05.896052269 +0000 UTC m=+24.429755835" observedRunningTime="2025-12-16 12:46:06.674397646 +0000 UTC m=+25.208101212" watchObservedRunningTime="2025-12-16 12:46:06.674569627 +0000 UTC m=+25.208273201" Dec 16 12:46:07.566346 kubelet[3651]: E1216 12:46:07.566293 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:07.662833 kubelet[3651]: I1216 12:46:07.662794 3651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:46:09.566042 kubelet[3651]: E1216 12:46:09.565504 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:09.598911 containerd[2018]: time="2025-12-16T12:46:09.598819272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:09.602428 containerd[2018]: time="2025-12-16T12:46:09.602369435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:46:09.605458 containerd[2018]: time="2025-12-16T12:46:09.605413119Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:09.611427 containerd[2018]: time="2025-12-16T12:46:09.611368195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.714125442s" Dec 16 12:46:09.611427 containerd[2018]: time="2025-12-16T12:46:09.611422260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:46:09.611601 containerd[2018]: time="2025-12-16T12:46:09.611559817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:09.619458 containerd[2018]: time="2025-12-16T12:46:09.619404445Z" level=info msg="CreateContainer within sandbox \"8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:46:09.638325 containerd[2018]: time="2025-12-16T12:46:09.638024447Z" level=info msg="Container e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:09.659546 containerd[2018]: time="2025-12-16T12:46:09.659444318Z" level=info msg="CreateContainer within sandbox \"8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2\"" Dec 16 12:46:09.662927 containerd[2018]: time="2025-12-16T12:46:09.661610519Z" level=info msg="StartContainer for \"e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2\"" Dec 16 12:46:09.664996 containerd[2018]: time="2025-12-16T12:46:09.664954308Z" level=info msg="connecting to shim e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2" address="unix:///run/containerd/s/0ce5851f4fa2941d87c8688522bd9a92b17dbcfba7d6c2d331088b1d8b2a3604" protocol=ttrpc version=3 Dec 16 12:46:09.689110 systemd[1]: Started cri-containerd-e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2.scope - libcontainer container e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2. 
Dec 16 12:46:09.734240 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 16 12:46:09.734412 kernel: audit: type=1334 audit(1765889169.729:578): prog-id=192 op=LOAD Dec 16 12:46:09.729000 audit: BPF prog-id=192 op=LOAD Dec 16 12:46:09.729000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4112 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.755314 kernel: audit: type=1300 audit(1765889169.729:578): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4112 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343935323365353361353737333936663535343338306334666235 Dec 16 12:46:09.773505 kernel: audit: type=1327 audit(1765889169.729:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343935323365353361353737333936663535343338306334666235 Dec 16 12:46:09.729000 audit: BPF prog-id=193 op=LOAD Dec 16 12:46:09.778225 kernel: audit: type=1334 audit(1765889169.729:579): prog-id=193 op=LOAD Dec 16 12:46:09.729000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4112 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.801186 kernel: audit: type=1300 audit(1765889169.729:579): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4112 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343935323365353361353737333936663535343338306334666235 Dec 16 12:46:09.818178 kernel: audit: type=1327 audit(1765889169.729:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343935323365353361353737333936663535343338306334666235 Dec 16 12:46:09.821198 kernel: audit: type=1334 audit(1765889169.733:580): prog-id=193 op=UNLOAD Dec 16 12:46:09.733000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:46:09.733000 audit[4312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.841271 kernel: audit: type=1300 audit(1765889169.733:580): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.733000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343935323365353361353737333936663535343338306334666235 Dec 16 12:46:09.860454 kernel: audit: type=1327 audit(1765889169.733:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343935323365353361353737333936663535343338306334666235 Dec 16 12:46:09.860601 kernel: audit: type=1334 audit(1765889169.733:581): prog-id=192 op=UNLOAD Dec 16 12:46:09.733000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:46:09.733000 audit[4312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343935323365353361353737333936663535343338306334666235 Dec 16 12:46:09.733000 audit: BPF prog-id=194 op=LOAD Dec 16 12:46:09.733000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4112 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.733000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343935323365353361353737333936663535343338306334666235 Dec 16 12:46:09.869995 containerd[2018]: time="2025-12-16T12:46:09.869941952Z" level=info msg="StartContainer for \"e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2\" returns successfully" Dec 16 12:46:10.938052 containerd[2018]: time="2025-12-16T12:46:10.937994211Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:46:10.941540 systemd[1]: cri-containerd-e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2.scope: Deactivated successfully. Dec 16 12:46:10.942046 systemd[1]: cri-containerd-e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2.scope: Consumed 334ms CPU time, 186.7M memory peak, 165.9M written to disk. Dec 16 12:46:10.944033 containerd[2018]: time="2025-12-16T12:46:10.943895613Z" level=info msg="received container exit event container_id:\"e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2\" id:\"e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2\" pid:4325 exited_at:{seconds:1765889170 nanos:943591756}" Dec 16 12:46:10.945000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:46:10.964049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2-rootfs.mount: Deactivated successfully. 
Dec 16 12:46:11.022102 kubelet[3651]: I1216 12:46:11.022021 3651 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:46:12.323117 systemd[1]: Created slice kubepods-besteffort-pod3efdc05c_5fa9_4b11_9892_e51e34365644.slice - libcontainer container kubepods-besteffort-pod3efdc05c_5fa9_4b11_9892_e51e34365644.slice. Dec 16 12:46:12.325475 containerd[2018]: time="2025-12-16T12:46:12.324400130Z" level=error msg="collecting metrics for e949523e53a577396f554380c4fb57410e25290ed6eee1611ec5f34741a88fb2" error="ttrpc: closed" Dec 16 12:46:12.338126 systemd[1]: Created slice kubepods-besteffort-pod180cd658_ddf7_4444_81e2_acfbf19611d5.slice - libcontainer container kubepods-besteffort-pod180cd658_ddf7_4444_81e2_acfbf19611d5.slice. Dec 16 12:46:12.349056 containerd[2018]: time="2025-12-16T12:46:12.348121790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mj8nq,Uid:180cd658-ddf7-4444-81e2-acfbf19611d5,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:12.369172 systemd[1]: Created slice kubepods-besteffort-pod64ea5eda_7471_4a46_a060_d20c3a27b031.slice - libcontainer container kubepods-besteffort-pod64ea5eda_7471_4a46_a060_d20c3a27b031.slice. Dec 16 12:46:12.388166 systemd[1]: Created slice kubepods-besteffort-pode19615e9_eae1_4066_8da3_a07943f9e95e.slice - libcontainer container kubepods-besteffort-pode19615e9_eae1_4066_8da3_a07943f9e95e.slice. 
Dec 16 12:46:12.391919 kubelet[3651]: I1216 12:46:12.391075 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31923b4e-d7e7-4360-b824-f299f181acf0-calico-apiserver-certs\") pod \"calico-apiserver-77b4c77bdf-sh2bd\" (UID: \"31923b4e-d7e7-4360-b824-f299f181acf0\") " pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" Dec 16 12:46:12.391919 kubelet[3651]: I1216 12:46:12.391121 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f12e3f4c-f803-496f-aa7c-d8e02fdb59ff-calico-apiserver-certs\") pod \"calico-apiserver-6fb597c7bd-fs82z\" (UID: \"f12e3f4c-f803-496f-aa7c-d8e02fdb59ff\") " pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" Dec 16 12:46:12.391919 kubelet[3651]: I1216 12:46:12.391135 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tfqz\" (UniqueName: \"kubernetes.io/projected/65a10ec6-3cde-4628-b3e7-3d3f8c440052-kube-api-access-9tfqz\") pod \"coredns-674b8bbfcf-t79gj\" (UID: \"65a10ec6-3cde-4628-b3e7-3d3f8c440052\") " pod="kube-system/coredns-674b8bbfcf-t79gj" Dec 16 12:46:12.391919 kubelet[3651]: I1216 12:46:12.391161 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e19615e9-eae1-4066-8da3-a07943f9e95e-calico-apiserver-certs\") pod \"calico-apiserver-77b4c77bdf-4jfxv\" (UID: \"e19615e9-eae1-4066-8da3-a07943f9e95e\") " pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" Dec 16 12:46:12.391919 kubelet[3651]: I1216 12:46:12.391172 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64ea5eda-7471-4a46-a060-d20c3a27b031-tigera-ca-bundle\") pod 
\"calico-kube-controllers-6ddd8d65c6-nlxv9\" (UID: \"64ea5eda-7471-4a46-a060-d20c3a27b031\") " pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" Dec 16 12:46:12.392322 kubelet[3651]: I1216 12:46:12.391183 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-679l2\" (UniqueName: \"kubernetes.io/projected/3efdc05c-5fa9-4b11-9892-e51e34365644-kube-api-access-679l2\") pod \"whisker-57ff568748-wzft8\" (UID: \"3efdc05c-5fa9-4b11-9892-e51e34365644\") " pod="calico-system/whisker-57ff568748-wzft8" Dec 16 12:46:12.392322 kubelet[3651]: I1216 12:46:12.391204 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828880ea-211a-4230-af15-b5fa7bcbc734-config\") pod \"goldmane-666569f655-llq4t\" (UID: \"828880ea-211a-4230-af15-b5fa7bcbc734\") " pod="calico-system/goldmane-666569f655-llq4t" Dec 16 12:46:12.392322 kubelet[3651]: I1216 12:46:12.391215 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/828880ea-211a-4230-af15-b5fa7bcbc734-goldmane-key-pair\") pod \"goldmane-666569f655-llq4t\" (UID: \"828880ea-211a-4230-af15-b5fa7bcbc734\") " pod="calico-system/goldmane-666569f655-llq4t" Dec 16 12:46:12.392322 kubelet[3651]: I1216 12:46:12.391224 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65a10ec6-3cde-4628-b3e7-3d3f8c440052-config-volume\") pod \"coredns-674b8bbfcf-t79gj\" (UID: \"65a10ec6-3cde-4628-b3e7-3d3f8c440052\") " pod="kube-system/coredns-674b8bbfcf-t79gj" Dec 16 12:46:12.392322 kubelet[3651]: I1216 12:46:12.391236 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvng6\" (UniqueName: 
\"kubernetes.io/projected/2abb66c6-2c22-4302-a249-0480581958a4-kube-api-access-tvng6\") pod \"coredns-674b8bbfcf-tvxxz\" (UID: \"2abb66c6-2c22-4302-a249-0480581958a4\") " pod="kube-system/coredns-674b8bbfcf-tvxxz" Dec 16 12:46:12.392405 kubelet[3651]: I1216 12:46:12.391273 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvdk\" (UniqueName: \"kubernetes.io/projected/828880ea-211a-4230-af15-b5fa7bcbc734-kube-api-access-sjvdk\") pod \"goldmane-666569f655-llq4t\" (UID: \"828880ea-211a-4230-af15-b5fa7bcbc734\") " pod="calico-system/goldmane-666569f655-llq4t" Dec 16 12:46:12.392405 kubelet[3651]: I1216 12:46:12.391284 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2g2x\" (UniqueName: \"kubernetes.io/projected/64ea5eda-7471-4a46-a060-d20c3a27b031-kube-api-access-c2g2x\") pod \"calico-kube-controllers-6ddd8d65c6-nlxv9\" (UID: \"64ea5eda-7471-4a46-a060-d20c3a27b031\") " pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" Dec 16 12:46:12.392405 kubelet[3651]: I1216 12:46:12.391296 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3efdc05c-5fa9-4b11-9892-e51e34365644-whisker-ca-bundle\") pod \"whisker-57ff568748-wzft8\" (UID: \"3efdc05c-5fa9-4b11-9892-e51e34365644\") " pod="calico-system/whisker-57ff568748-wzft8" Dec 16 12:46:12.392405 kubelet[3651]: I1216 12:46:12.391307 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2abb66c6-2c22-4302-a249-0480581958a4-config-volume\") pod \"coredns-674b8bbfcf-tvxxz\" (UID: \"2abb66c6-2c22-4302-a249-0480581958a4\") " pod="kube-system/coredns-674b8bbfcf-tvxxz" Dec 16 12:46:12.392405 kubelet[3651]: I1216 12:46:12.391319 3651 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zgv\" (UniqueName: \"kubernetes.io/projected/e19615e9-eae1-4066-8da3-a07943f9e95e-kube-api-access-l9zgv\") pod \"calico-apiserver-77b4c77bdf-4jfxv\" (UID: \"e19615e9-eae1-4066-8da3-a07943f9e95e\") " pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" Dec 16 12:46:12.392488 kubelet[3651]: I1216 12:46:12.391328 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/828880ea-211a-4230-af15-b5fa7bcbc734-goldmane-ca-bundle\") pod \"goldmane-666569f655-llq4t\" (UID: \"828880ea-211a-4230-af15-b5fa7bcbc734\") " pod="calico-system/goldmane-666569f655-llq4t" Dec 16 12:46:12.392488 kubelet[3651]: I1216 12:46:12.391339 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjccj\" (UniqueName: \"kubernetes.io/projected/f12e3f4c-f803-496f-aa7c-d8e02fdb59ff-kube-api-access-bjccj\") pod \"calico-apiserver-6fb597c7bd-fs82z\" (UID: \"f12e3f4c-f803-496f-aa7c-d8e02fdb59ff\") " pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" Dec 16 12:46:12.392488 kubelet[3651]: I1216 12:46:12.391356 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9vv\" (UniqueName: \"kubernetes.io/projected/31923b4e-d7e7-4360-b824-f299f181acf0-kube-api-access-4s9vv\") pod \"calico-apiserver-77b4c77bdf-sh2bd\" (UID: \"31923b4e-d7e7-4360-b824-f299f181acf0\") " pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" Dec 16 12:46:12.392488 kubelet[3651]: I1216 12:46:12.391368 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3efdc05c-5fa9-4b11-9892-e51e34365644-whisker-backend-key-pair\") pod \"whisker-57ff568748-wzft8\" (UID: \"3efdc05c-5fa9-4b11-9892-e51e34365644\") 
" pod="calico-system/whisker-57ff568748-wzft8" Dec 16 12:46:12.400756 systemd[1]: Created slice kubepods-besteffort-pod828880ea_211a_4230_af15_b5fa7bcbc734.slice - libcontainer container kubepods-besteffort-pod828880ea_211a_4230_af15_b5fa7bcbc734.slice. Dec 16 12:46:12.415636 systemd[1]: Created slice kubepods-burstable-pod2abb66c6_2c22_4302_a249_0480581958a4.slice - libcontainer container kubepods-burstable-pod2abb66c6_2c22_4302_a249_0480581958a4.slice. Dec 16 12:46:12.427219 systemd[1]: Created slice kubepods-besteffort-pod31923b4e_d7e7_4360_b824_f299f181acf0.slice - libcontainer container kubepods-besteffort-pod31923b4e_d7e7_4360_b824_f299f181acf0.slice. Dec 16 12:46:12.433663 systemd[1]: Created slice kubepods-burstable-pod65a10ec6_3cde_4628_b3e7_3d3f8c440052.slice - libcontainer container kubepods-burstable-pod65a10ec6_3cde_4628_b3e7_3d3f8c440052.slice. Dec 16 12:46:12.445242 systemd[1]: Created slice kubepods-besteffort-podf12e3f4c_f803_496f_aa7c_d8e02fdb59ff.slice - libcontainer container kubepods-besteffort-podf12e3f4c_f803_496f_aa7c_d8e02fdb59ff.slice. Dec 16 12:46:12.459236 containerd[2018]: time="2025-12-16T12:46:12.459092642Z" level=error msg="Failed to destroy network for sandbox \"945ade5587db6f37c38fa0e4eeb44429e93b9238c0eb854b277de76097fc97b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.461263 systemd[1]: run-netns-cni\x2d12f5a8f6\x2d707a\x2d572b\x2dc61b\x2de54df534e1f4.mount: Deactivated successfully. 
Dec 16 12:46:12.466411 containerd[2018]: time="2025-12-16T12:46:12.466277701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mj8nq,Uid:180cd658-ddf7-4444-81e2-acfbf19611d5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"945ade5587db6f37c38fa0e4eeb44429e93b9238c0eb854b277de76097fc97b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.466799 kubelet[3651]: E1216 12:46:12.466753 3651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"945ade5587db6f37c38fa0e4eeb44429e93b9238c0eb854b277de76097fc97b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.467419 kubelet[3651]: E1216 12:46:12.466965 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"945ade5587db6f37c38fa0e4eeb44429e93b9238c0eb854b277de76097fc97b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mj8nq" Dec 16 12:46:12.467419 kubelet[3651]: E1216 12:46:12.466991 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"945ade5587db6f37c38fa0e4eeb44429e93b9238c0eb854b277de76097fc97b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mj8nq" 
Dec 16 12:46:12.467419 kubelet[3651]: E1216 12:46:12.467063 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"945ade5587db6f37c38fa0e4eeb44429e93b9238c0eb854b277de76097fc97b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:12.636929 containerd[2018]: time="2025-12-16T12:46:12.636769710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57ff568748-wzft8,Uid:3efdc05c-5fa9-4b11-9892-e51e34365644,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:12.677081 containerd[2018]: time="2025-12-16T12:46:12.677035476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ddd8d65c6-nlxv9,Uid:64ea5eda-7471-4a46-a060-d20c3a27b031,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:12.680946 containerd[2018]: time="2025-12-16T12:46:12.680897366Z" level=error msg="Failed to destroy network for sandbox \"4b008c7b64797f772842d612fd30a283e6a70057280580f7bddf765c0494a07d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.687772 containerd[2018]: time="2025-12-16T12:46:12.687721541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57ff568748-wzft8,Uid:3efdc05c-5fa9-4b11-9892-e51e34365644,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"4b008c7b64797f772842d612fd30a283e6a70057280580f7bddf765c0494a07d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.688250 kubelet[3651]: E1216 12:46:12.688127 3651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b008c7b64797f772842d612fd30a283e6a70057280580f7bddf765c0494a07d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.689079 kubelet[3651]: E1216 12:46:12.688941 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b008c7b64797f772842d612fd30a283e6a70057280580f7bddf765c0494a07d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57ff568748-wzft8" Dec 16 12:46:12.689079 kubelet[3651]: E1216 12:46:12.688989 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b008c7b64797f772842d612fd30a283e6a70057280580f7bddf765c0494a07d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57ff568748-wzft8" Dec 16 12:46:12.689079 kubelet[3651]: E1216 12:46:12.689026 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57ff568748-wzft8_calico-system(3efdc05c-5fa9-4b11-9892-e51e34365644)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-57ff568748-wzft8_calico-system(3efdc05c-5fa9-4b11-9892-e51e34365644)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b008c7b64797f772842d612fd30a283e6a70057280580f7bddf765c0494a07d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57ff568748-wzft8" podUID="3efdc05c-5fa9-4b11-9892-e51e34365644" Dec 16 12:46:12.692043 containerd[2018]: time="2025-12-16T12:46:12.691955379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:46:12.698913 containerd[2018]: time="2025-12-16T12:46:12.698710304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b4c77bdf-4jfxv,Uid:e19615e9-eae1-4066-8da3-a07943f9e95e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:12.710882 containerd[2018]: time="2025-12-16T12:46:12.710809645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-llq4t,Uid:828880ea-211a-4230-af15-b5fa7bcbc734,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:12.722088 containerd[2018]: time="2025-12-16T12:46:12.722046696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvxxz,Uid:2abb66c6-2c22-4302-a249-0480581958a4,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:12.733334 containerd[2018]: time="2025-12-16T12:46:12.733190055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b4c77bdf-sh2bd,Uid:31923b4e-d7e7-4360-b824-f299f181acf0,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:12.742659 containerd[2018]: time="2025-12-16T12:46:12.742616256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t79gj,Uid:65a10ec6-3cde-4628-b3e7-3d3f8c440052,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:12.750136 containerd[2018]: time="2025-12-16T12:46:12.750013338Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6fb597c7bd-fs82z,Uid:f12e3f4c-f803-496f-aa7c-d8e02fdb59ff,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:12.752426 containerd[2018]: time="2025-12-16T12:46:12.752266721Z" level=error msg="Failed to destroy network for sandbox \"c0ed15b6b1161131b3b2d7c7ae5c8e8ea3da2343a196c0e93847974f4b37177a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.770990 containerd[2018]: time="2025-12-16T12:46:12.770222311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ddd8d65c6-nlxv9,Uid:64ea5eda-7471-4a46-a060-d20c3a27b031,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ed15b6b1161131b3b2d7c7ae5c8e8ea3da2343a196c0e93847974f4b37177a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.771195 kubelet[3651]: E1216 12:46:12.770494 3651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ed15b6b1161131b3b2d7c7ae5c8e8ea3da2343a196c0e93847974f4b37177a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.771195 kubelet[3651]: E1216 12:46:12.770563 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ed15b6b1161131b3b2d7c7ae5c8e8ea3da2343a196c0e93847974f4b37177a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" Dec 16 12:46:12.771195 kubelet[3651]: E1216 12:46:12.770582 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ed15b6b1161131b3b2d7c7ae5c8e8ea3da2343a196c0e93847974f4b37177a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" Dec 16 12:46:12.771281 kubelet[3651]: E1216 12:46:12.770632 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ddd8d65c6-nlxv9_calico-system(64ea5eda-7471-4a46-a060-d20c3a27b031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6ddd8d65c6-nlxv9_calico-system(64ea5eda-7471-4a46-a060-d20c3a27b031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0ed15b6b1161131b3b2d7c7ae5c8e8ea3da2343a196c0e93847974f4b37177a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:46:12.861637 containerd[2018]: time="2025-12-16T12:46:12.861373290Z" level=error msg="Failed to destroy network for sandbox \"16ed105456e5fe7a642cb278a34953cf8037cc6daf343b0d250dc745af199476\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.869208 containerd[2018]: time="2025-12-16T12:46:12.869096110Z" level=error msg="Failed to destroy network for sandbox 
\"239af16d258f83f1f7972b764868138e12fe8a0e8a7df368bfba6b0b1458effc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.881558 containerd[2018]: time="2025-12-16T12:46:12.881395585Z" level=error msg="Failed to destroy network for sandbox \"2b765d817260732aa322a6d14f42475cf4405b0a8b526a1328c2d1dc06ad62e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.881724 containerd[2018]: time="2025-12-16T12:46:12.881686219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b4c77bdf-4jfxv,Uid:e19615e9-eae1-4066-8da3-a07943f9e95e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ed105456e5fe7a642cb278a34953cf8037cc6daf343b0d250dc745af199476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.882669 kubelet[3651]: E1216 12:46:12.882552 3651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ed105456e5fe7a642cb278a34953cf8037cc6daf343b0d250dc745af199476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.882669 kubelet[3651]: E1216 12:46:12.882626 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ed105456e5fe7a642cb278a34953cf8037cc6daf343b0d250dc745af199476\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" Dec 16 12:46:12.882669 kubelet[3651]: E1216 12:46:12.882645 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ed105456e5fe7a642cb278a34953cf8037cc6daf343b0d250dc745af199476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" Dec 16 12:46:12.883853 kubelet[3651]: E1216 12:46:12.882714 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77b4c77bdf-4jfxv_calico-apiserver(e19615e9-eae1-4066-8da3-a07943f9e95e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77b4c77bdf-4jfxv_calico-apiserver(e19615e9-eae1-4066-8da3-a07943f9e95e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16ed105456e5fe7a642cb278a34953cf8037cc6daf343b0d250dc745af199476\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:46:12.888046 containerd[2018]: time="2025-12-16T12:46:12.887210689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvxxz,Uid:2abb66c6-2c22-4302-a249-0480581958a4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"239af16d258f83f1f7972b764868138e12fe8a0e8a7df368bfba6b0b1458effc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.888712 kubelet[3651]: E1216 12:46:12.888646 3651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"239af16d258f83f1f7972b764868138e12fe8a0e8a7df368bfba6b0b1458effc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.890573 kubelet[3651]: E1216 12:46:12.888823 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"239af16d258f83f1f7972b764868138e12fe8a0e8a7df368bfba6b0b1458effc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tvxxz" Dec 16 12:46:12.890573 kubelet[3651]: E1216 12:46:12.889508 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"239af16d258f83f1f7972b764868138e12fe8a0e8a7df368bfba6b0b1458effc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tvxxz" Dec 16 12:46:12.890573 kubelet[3651]: E1216 12:46:12.889603 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tvxxz_kube-system(2abb66c6-2c22-4302-a249-0480581958a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tvxxz_kube-system(2abb66c6-2c22-4302-a249-0480581958a4)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"239af16d258f83f1f7972b764868138e12fe8a0e8a7df368bfba6b0b1458effc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tvxxz" podUID="2abb66c6-2c22-4302-a249-0480581958a4" Dec 16 12:46:12.893783 containerd[2018]: time="2025-12-16T12:46:12.893284384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-llq4t,Uid:828880ea-211a-4230-af15-b5fa7bcbc734,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b765d817260732aa322a6d14f42475cf4405b0a8b526a1328c2d1dc06ad62e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.893946 kubelet[3651]: E1216 12:46:12.893602 3651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b765d817260732aa322a6d14f42475cf4405b0a8b526a1328c2d1dc06ad62e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.893946 kubelet[3651]: E1216 12:46:12.893660 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b765d817260732aa322a6d14f42475cf4405b0a8b526a1328c2d1dc06ad62e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-llq4t" Dec 16 12:46:12.893946 kubelet[3651]: E1216 12:46:12.893676 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b765d817260732aa322a6d14f42475cf4405b0a8b526a1328c2d1dc06ad62e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-llq4t" Dec 16 12:46:12.894027 kubelet[3651]: E1216 12:46:12.893721 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-llq4t_calico-system(828880ea-211a-4230-af15-b5fa7bcbc734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-llq4t_calico-system(828880ea-211a-4230-af15-b5fa7bcbc734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b765d817260732aa322a6d14f42475cf4405b0a8b526a1328c2d1dc06ad62e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:46:12.894225 containerd[2018]: time="2025-12-16T12:46:12.894184509Z" level=error msg="Failed to destroy network for sandbox \"7ba4673d92dc6ef9ac00704ee8591de50019d230215dd8f496ada52eca41b1a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.900212 containerd[2018]: time="2025-12-16T12:46:12.900152121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b4c77bdf-sh2bd,Uid:31923b4e-d7e7-4360-b824-f299f181acf0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba4673d92dc6ef9ac00704ee8591de50019d230215dd8f496ada52eca41b1a4\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.900992 kubelet[3651]: E1216 12:46:12.900399 3651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba4673d92dc6ef9ac00704ee8591de50019d230215dd8f496ada52eca41b1a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.900992 kubelet[3651]: E1216 12:46:12.900449 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba4673d92dc6ef9ac00704ee8591de50019d230215dd8f496ada52eca41b1a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" Dec 16 12:46:12.900992 kubelet[3651]: E1216 12:46:12.900466 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba4673d92dc6ef9ac00704ee8591de50019d230215dd8f496ada52eca41b1a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" Dec 16 12:46:12.901091 kubelet[3651]: E1216 12:46:12.900505 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77b4c77bdf-sh2bd_calico-apiserver(31923b4e-d7e7-4360-b824-f299f181acf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-77b4c77bdf-sh2bd_calico-apiserver(31923b4e-d7e7-4360-b824-f299f181acf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ba4673d92dc6ef9ac00704ee8591de50019d230215dd8f496ada52eca41b1a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:46:12.911899 containerd[2018]: time="2025-12-16T12:46:12.911843242Z" level=error msg="Failed to destroy network for sandbox \"ad5fb288350c114b09c7662a07a46c2ae961b58123d5892685fbe6c9c5fd4a9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.915958 containerd[2018]: time="2025-12-16T12:46:12.915890961Z" level=error msg="Failed to destroy network for sandbox \"3f1a5d132b876b7d90e2f96247916e14098350dd3953dab4c12bdddab3814ad6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.927338 containerd[2018]: time="2025-12-16T12:46:12.927221775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb597c7bd-fs82z,Uid:f12e3f4c-f803-496f-aa7c-d8e02fdb59ff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5fb288350c114b09c7662a07a46c2ae961b58123d5892685fbe6c9c5fd4a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.929150 kubelet[3651]: E1216 12:46:12.927776 3651 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5fb288350c114b09c7662a07a46c2ae961b58123d5892685fbe6c9c5fd4a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.929150 kubelet[3651]: E1216 12:46:12.927859 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5fb288350c114b09c7662a07a46c2ae961b58123d5892685fbe6c9c5fd4a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" Dec 16 12:46:12.929150 kubelet[3651]: E1216 12:46:12.927931 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5fb288350c114b09c7662a07a46c2ae961b58123d5892685fbe6c9c5fd4a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" Dec 16 12:46:12.929305 kubelet[3651]: E1216 12:46:12.928011 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb597c7bd-fs82z_calico-apiserver(f12e3f4c-f803-496f-aa7c-d8e02fdb59ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb597c7bd-fs82z_calico-apiserver(f12e3f4c-f803-496f-aa7c-d8e02fdb59ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad5fb288350c114b09c7662a07a46c2ae961b58123d5892685fbe6c9c5fd4a9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:46:12.932963 containerd[2018]: time="2025-12-16T12:46:12.932915266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t79gj,Uid:65a10ec6-3cde-4628-b3e7-3d3f8c440052,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f1a5d132b876b7d90e2f96247916e14098350dd3953dab4c12bdddab3814ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.933393 kubelet[3651]: E1216 12:46:12.933347 3651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f1a5d132b876b7d90e2f96247916e14098350dd3953dab4c12bdddab3814ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:12.933471 kubelet[3651]: E1216 12:46:12.933413 3651 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f1a5d132b876b7d90e2f96247916e14098350dd3953dab4c12bdddab3814ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t79gj" Dec 16 12:46:12.933471 kubelet[3651]: E1216 12:46:12.933432 3651 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f1a5d132b876b7d90e2f96247916e14098350dd3953dab4c12bdddab3814ad6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t79gj" Dec 16 12:46:12.933517 kubelet[3651]: E1216 12:46:12.933479 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t79gj_kube-system(65a10ec6-3cde-4628-b3e7-3d3f8c440052)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t79gj_kube-system(65a10ec6-3cde-4628-b3e7-3d3f8c440052)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f1a5d132b876b7d90e2f96247916e14098350dd3953dab4c12bdddab3814ad6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t79gj" podUID="65a10ec6-3cde-4628-b3e7-3d3f8c440052" Dec 16 12:46:17.979277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802676697.mount: Deactivated successfully. 
Dec 16 12:46:18.304170 containerd[2018]: time="2025-12-16T12:46:18.303612145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:18.308722 containerd[2018]: time="2025-12-16T12:46:18.308664400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:46:18.311312 containerd[2018]: time="2025-12-16T12:46:18.311248938Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:18.323159 containerd[2018]: time="2025-12-16T12:46:18.323075943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:18.323800 containerd[2018]: time="2025-12-16T12:46:18.323521893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 5.631517369s" Dec 16 12:46:18.323800 containerd[2018]: time="2025-12-16T12:46:18.323558118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:46:18.344728 containerd[2018]: time="2025-12-16T12:46:18.344680648Z" level=info msg="CreateContainer within sandbox \"8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:46:18.366890 containerd[2018]: time="2025-12-16T12:46:18.366783994Z" level=info msg="Container 
e21650db1c32199f9db268114543e5754a9eea007106666bc431390051be88b8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:18.368703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256710386.mount: Deactivated successfully. Dec 16 12:46:18.381015 containerd[2018]: time="2025-12-16T12:46:18.380963665Z" level=info msg="CreateContainer within sandbox \"8ae748c2667b10869f815e8826c453519ded9f784c8e82654c988f09c8f31216\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e21650db1c32199f9db268114543e5754a9eea007106666bc431390051be88b8\"" Dec 16 12:46:18.381692 containerd[2018]: time="2025-12-16T12:46:18.381661447Z" level=info msg="StartContainer for \"e21650db1c32199f9db268114543e5754a9eea007106666bc431390051be88b8\"" Dec 16 12:46:18.383233 containerd[2018]: time="2025-12-16T12:46:18.383200007Z" level=info msg="connecting to shim e21650db1c32199f9db268114543e5754a9eea007106666bc431390051be88b8" address="unix:///run/containerd/s/0ce5851f4fa2941d87c8688522bd9a92b17dbcfba7d6c2d331088b1d8b2a3604" protocol=ttrpc version=3 Dec 16 12:46:18.419135 systemd[1]: Started cri-containerd-e21650db1c32199f9db268114543e5754a9eea007106666bc431390051be88b8.scope - libcontainer container e21650db1c32199f9db268114543e5754a9eea007106666bc431390051be88b8. 
Dec 16 12:46:18.476000 audit: BPF prog-id=195 op=LOAD Dec 16 12:46:18.480526 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:46:18.480594 kernel: audit: type=1334 audit(1765889178.476:584): prog-id=195 op=LOAD Dec 16 12:46:18.476000 audit[4608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4112 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.501228 kernel: audit: type=1300 audit(1765889178.476:584): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4112 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532313635306462316333323139396639646232363831313435343365 Dec 16 12:46:18.517866 kernel: audit: type=1327 audit(1765889178.476:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532313635306462316333323139396639646232363831313435343365 Dec 16 12:46:18.522522 kernel: audit: type=1334 audit(1765889178.481:585): prog-id=196 op=LOAD Dec 16 12:46:18.481000 audit: BPF prog-id=196 op=LOAD Dec 16 12:46:18.481000 audit[4608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4112 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.539138 kernel: audit: type=1300 audit(1765889178.481:585): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4112 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532313635306462316333323139396639646232363831313435343365 Dec 16 12:46:18.556918 kernel: audit: type=1327 audit(1765889178.481:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532313635306462316333323139396639646232363831313435343365 Dec 16 12:46:18.484000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:46:18.563854 kernel: audit: type=1334 audit(1765889178.484:586): prog-id=196 op=UNLOAD Dec 16 12:46:18.564000 kernel: audit: type=1300 audit(1765889178.484:586): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.484000 audit[4608]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.484000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532313635306462316333323139396639646232363831313435343365 Dec 16 12:46:18.598258 kernel: audit: type=1327 audit(1765889178.484:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532313635306462316333323139396639646232363831313435343365 Dec 16 12:46:18.484000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:46:18.605956 kernel: audit: type=1334 audit(1765889178.484:587): prog-id=195 op=UNLOAD Dec 16 12:46:18.484000 audit[4608]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4112 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532313635306462316333323139396639646232363831313435343365 Dec 16 12:46:18.484000 audit: BPF prog-id=197 op=LOAD Dec 16 12:46:18.484000 audit[4608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4112 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.484000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532313635306462316333323139396639646232363831313435343365 Dec 16 12:46:18.625304 containerd[2018]: time="2025-12-16T12:46:18.625194544Z" level=info msg="StartContainer for \"e21650db1c32199f9db268114543e5754a9eea007106666bc431390051be88b8\" returns successfully" Dec 16 12:46:18.727352 kubelet[3651]: I1216 12:46:18.727239 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tvmc8" podStartSLOduration=1.198579381 podStartE2EDuration="17.727218642s" podCreationTimestamp="2025-12-16 12:46:01 +0000 UTC" firstStartedPulling="2025-12-16 12:46:01.795640896 +0000 UTC m=+20.329344462" lastFinishedPulling="2025-12-16 12:46:18.324280157 +0000 UTC m=+36.857983723" observedRunningTime="2025-12-16 12:46:18.726089094 +0000 UTC m=+37.259792660" watchObservedRunningTime="2025-12-16 12:46:18.727218642 +0000 UTC m=+37.260922208" Dec 16 12:46:18.927931 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:46:18.928135 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 12:46:19.145639 kubelet[3651]: I1216 12:46:19.145576 3651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3efdc05c-5fa9-4b11-9892-e51e34365644-whisker-backend-key-pair\") pod \"3efdc05c-5fa9-4b11-9892-e51e34365644\" (UID: \"3efdc05c-5fa9-4b11-9892-e51e34365644\") " Dec 16 12:46:19.145639 kubelet[3651]: I1216 12:46:19.145637 3651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-679l2\" (UniqueName: \"kubernetes.io/projected/3efdc05c-5fa9-4b11-9892-e51e34365644-kube-api-access-679l2\") pod \"3efdc05c-5fa9-4b11-9892-e51e34365644\" (UID: \"3efdc05c-5fa9-4b11-9892-e51e34365644\") " Dec 16 12:46:19.145925 kubelet[3651]: I1216 12:46:19.145668 3651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3efdc05c-5fa9-4b11-9892-e51e34365644-whisker-ca-bundle\") pod \"3efdc05c-5fa9-4b11-9892-e51e34365644\" (UID: \"3efdc05c-5fa9-4b11-9892-e51e34365644\") " Dec 16 12:46:19.149407 kubelet[3651]: I1216 12:46:19.148484 3651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3efdc05c-5fa9-4b11-9892-e51e34365644-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3efdc05c-5fa9-4b11-9892-e51e34365644" (UID: "3efdc05c-5fa9-4b11-9892-e51e34365644"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:46:19.151676 systemd[1]: var-lib-kubelet-pods-3efdc05c\x2d5fa9\x2d4b11\x2d9892\x2de51e34365644-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d679l2.mount: Deactivated successfully. Dec 16 12:46:19.152168 systemd[1]: var-lib-kubelet-pods-3efdc05c\x2d5fa9\x2d4b11\x2d9892\x2de51e34365644-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 12:46:19.154389 kubelet[3651]: I1216 12:46:19.154340 3651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efdc05c-5fa9-4b11-9892-e51e34365644-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3efdc05c-5fa9-4b11-9892-e51e34365644" (UID: "3efdc05c-5fa9-4b11-9892-e51e34365644"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:46:19.155664 kubelet[3651]: I1216 12:46:19.155613 3651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efdc05c-5fa9-4b11-9892-e51e34365644-kube-api-access-679l2" (OuterVolumeSpecName: "kube-api-access-679l2") pod "3efdc05c-5fa9-4b11-9892-e51e34365644" (UID: "3efdc05c-5fa9-4b11-9892-e51e34365644"). InnerVolumeSpecName "kube-api-access-679l2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:46:19.246499 kubelet[3651]: I1216 12:46:19.246427 3651 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3efdc05c-5fa9-4b11-9892-e51e34365644-whisker-ca-bundle\") on node \"ci-4515.1.0-a-6d618b7fe6\" DevicePath \"\"" Dec 16 12:46:19.246499 kubelet[3651]: I1216 12:46:19.246463 3651 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3efdc05c-5fa9-4b11-9892-e51e34365644-whisker-backend-key-pair\") on node \"ci-4515.1.0-a-6d618b7fe6\" DevicePath \"\"" Dec 16 12:46:19.246499 kubelet[3651]: I1216 12:46:19.246474 3651 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-679l2\" (UniqueName: \"kubernetes.io/projected/3efdc05c-5fa9-4b11-9892-e51e34365644-kube-api-access-679l2\") on node \"ci-4515.1.0-a-6d618b7fe6\" DevicePath \"\"" Dec 16 12:46:19.571005 systemd[1]: Removed slice kubepods-besteffort-pod3efdc05c_5fa9_4b11_9892_e51e34365644.slice - libcontainer container 
kubepods-besteffort-pod3efdc05c_5fa9_4b11_9892_e51e34365644.slice. Dec 16 12:46:19.799388 systemd[1]: Created slice kubepods-besteffort-pod12f0bf61_64a1_4c2f_bbd3_977cfb8492eb.slice - libcontainer container kubepods-besteffort-pod12f0bf61_64a1_4c2f_bbd3_977cfb8492eb.slice. Dec 16 12:46:19.850389 kubelet[3651]: I1216 12:46:19.850178 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-454zl\" (UniqueName: \"kubernetes.io/projected/12f0bf61-64a1-4c2f-bbd3-977cfb8492eb-kube-api-access-454zl\") pod \"whisker-ddcd6787d-54cfx\" (UID: \"12f0bf61-64a1-4c2f-bbd3-977cfb8492eb\") " pod="calico-system/whisker-ddcd6787d-54cfx" Dec 16 12:46:19.850389 kubelet[3651]: I1216 12:46:19.850233 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/12f0bf61-64a1-4c2f-bbd3-977cfb8492eb-whisker-backend-key-pair\") pod \"whisker-ddcd6787d-54cfx\" (UID: \"12f0bf61-64a1-4c2f-bbd3-977cfb8492eb\") " pod="calico-system/whisker-ddcd6787d-54cfx" Dec 16 12:46:19.850389 kubelet[3651]: I1216 12:46:19.850262 3651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f0bf61-64a1-4c2f-bbd3-977cfb8492eb-whisker-ca-bundle\") pod \"whisker-ddcd6787d-54cfx\" (UID: \"12f0bf61-64a1-4c2f-bbd3-977cfb8492eb\") " pod="calico-system/whisker-ddcd6787d-54cfx" Dec 16 12:46:20.103171 containerd[2018]: time="2025-12-16T12:46:20.103041687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ddcd6787d-54cfx,Uid:12f0bf61-64a1-4c2f-bbd3-977cfb8492eb,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:20.251415 systemd-networkd[1715]: cali90b707505ee: Link UP Dec 16 12:46:20.251865 systemd-networkd[1715]: cali90b707505ee: Gained carrier Dec 16 12:46:20.275318 containerd[2018]: 2025-12-16 12:46:20.131 [INFO][4722] cni-plugin/utils.go 
100: File /var/lib/calico/mtu does not exist Dec 16 12:46:20.275318 containerd[2018]: 2025-12-16 12:46:20.181 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0 whisker-ddcd6787d- calico-system 12f0bf61-64a1-4c2f-bbd3-977cfb8492eb 903 0 2025-12-16 12:46:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:ddcd6787d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 whisker-ddcd6787d-54cfx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali90b707505ee [] [] }} ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Namespace="calico-system" Pod="whisker-ddcd6787d-54cfx" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-" Dec 16 12:46:20.275318 containerd[2018]: 2025-12-16 12:46:20.182 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Namespace="calico-system" Pod="whisker-ddcd6787d-54cfx" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" Dec 16 12:46:20.275318 containerd[2018]: 2025-12-16 12:46:20.203 [INFO][4735] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" HandleID="k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.203 [INFO][4735] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" 
HandleID="k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"whisker-ddcd6787d-54cfx", "timestamp":"2025-12-16 12:46:20.203461703 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.203 [INFO][4735] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.203 [INFO][4735] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.203 [INFO][4735] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.209 [INFO][4735] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.213 [INFO][4735] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.217 [INFO][4735] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.219 [INFO][4735] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275580 containerd[2018]: 2025-12-16 12:46:20.221 [INFO][4735] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275724 containerd[2018]: 2025-12-16 12:46:20.221 [INFO][4735] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275724 containerd[2018]: 2025-12-16 12:46:20.223 [INFO][4735] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4 Dec 16 12:46:20.275724 containerd[2018]: 2025-12-16 12:46:20.227 [INFO][4735] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275724 containerd[2018]: 2025-12-16 12:46:20.235 [INFO][4735] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.129/26] block=192.168.127.128/26 handle="k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275724 containerd[2018]: 2025-12-16 12:46:20.236 [INFO][4735] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.129/26] handle="k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:20.275724 containerd[2018]: 2025-12-16 12:46:20.236 [INFO][4735] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:20.275724 containerd[2018]: 2025-12-16 12:46:20.238 [INFO][4735] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.129/26] IPv6=[] ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" HandleID="k8s-pod-network.c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" Dec 16 12:46:20.275817 containerd[2018]: 2025-12-16 12:46:20.242 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Namespace="calico-system" Pod="whisker-ddcd6787d-54cfx" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0", GenerateName:"whisker-ddcd6787d-", Namespace:"calico-system", SelfLink:"", UID:"12f0bf61-64a1-4c2f-bbd3-977cfb8492eb", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ddcd6787d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"whisker-ddcd6787d-54cfx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali90b707505ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:20.275817 containerd[2018]: 2025-12-16 12:46:20.242 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.129/32] ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Namespace="calico-system" Pod="whisker-ddcd6787d-54cfx" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" Dec 16 12:46:20.275863 containerd[2018]: 2025-12-16 12:46:20.242 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90b707505ee ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Namespace="calico-system" Pod="whisker-ddcd6787d-54cfx" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" Dec 16 12:46:20.275863 containerd[2018]: 2025-12-16 12:46:20.252 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Namespace="calico-system" Pod="whisker-ddcd6787d-54cfx" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" Dec 16 12:46:20.276214 containerd[2018]: 2025-12-16 12:46:20.253 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Namespace="calico-system" Pod="whisker-ddcd6787d-54cfx" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0", GenerateName:"whisker-ddcd6787d-", Namespace:"calico-system", SelfLink:"", UID:"12f0bf61-64a1-4c2f-bbd3-977cfb8492eb", 
ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ddcd6787d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4", Pod:"whisker-ddcd6787d-54cfx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90b707505ee", MAC:"82:15:3d:b4:fb:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:20.276262 containerd[2018]: 2025-12-16 12:46:20.269 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" Namespace="calico-system" Pod="whisker-ddcd6787d-54cfx" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-whisker--ddcd6787d--54cfx-eth0" Dec 16 12:46:20.344229 containerd[2018]: time="2025-12-16T12:46:20.343464564Z" level=info msg="connecting to shim c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4" address="unix:///run/containerd/s/cd0a339941a69bf0d25810d5cde7be6589dd520f5541cefd68094b677350225a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:20.397464 systemd[1]: Started cri-containerd-c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4.scope - libcontainer 
container c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4. Dec 16 12:46:20.429000 audit: BPF prog-id=198 op=LOAD Dec 16 12:46:20.429000 audit: BPF prog-id=199 op=LOAD Dec 16 12:46:20.429000 audit[4833]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4815 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336636232656439326362663362653866306237323630653036653538 Dec 16 12:46:20.430000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:46:20.430000 audit[4833]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4815 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336636232656439326362663362653866306237323630653036653538 Dec 16 12:46:20.430000 audit: BPF prog-id=200 op=LOAD Dec 16 12:46:20.430000 audit[4833]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4815 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.430000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336636232656439326362663362653866306237323630653036653538 Dec 16 12:46:20.430000 audit: BPF prog-id=201 op=LOAD Dec 16 12:46:20.430000 audit[4833]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4815 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336636232656439326362663362653866306237323630653036653538 Dec 16 12:46:20.430000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:46:20.430000 audit[4833]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4815 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336636232656439326362663362653866306237323630653036653538 Dec 16 12:46:20.430000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:46:20.430000 audit[4833]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4815 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:46:20.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336636232656439326362663362653866306237323630653036653538 Dec 16 12:46:20.430000 audit: BPF prog-id=202 op=LOAD Dec 16 12:46:20.430000 audit[4833]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4815 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:20.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336636232656439326362663362653866306237323630653036653538 Dec 16 12:46:20.538411 containerd[2018]: time="2025-12-16T12:46:20.538365424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ddcd6787d-54cfx,Uid:12f0bf61-64a1-4c2f-bbd3-977cfb8492eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6cb2ed92cbf3be8f0b7260e06e58d37f9555cba5975402dbe63c8b9561cecd4\"" Dec 16 12:46:20.540571 containerd[2018]: time="2025-12-16T12:46:20.540390666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:20.794704 containerd[2018]: time="2025-12-16T12:46:20.794637829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:20.797285 containerd[2018]: time="2025-12-16T12:46:20.797219728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:46:20.797550 
containerd[2018]: time="2025-12-16T12:46:20.797220944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:20.799659 kubelet[3651]: E1216 12:46:20.799607 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:20.799716 kubelet[3651]: E1216 12:46:20.799687 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:20.806255 kubelet[3651]: E1216 12:46:20.806197 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a79f5c30704f47cba759fbc41fd0b2a3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-454zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions
:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ddcd6787d-54cfx_calico-system(12f0bf61-64a1-4c2f-bbd3-977cfb8492eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:20.809041 containerd[2018]: time="2025-12-16T12:46:20.808977547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:46:21.105726 containerd[2018]: time="2025-12-16T12:46:21.105586476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:21.109766 containerd[2018]: time="2025-12-16T12:46:21.109634824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:46:21.109766 containerd[2018]: time="2025-12-16T12:46:21.109636352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:21.110089 kubelet[3651]: E1216 12:46:21.110049 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:21.110579 kubelet[3651]: E1216 12:46:21.110416 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:21.111030 kubelet[3651]: E1216 12:46:21.110951 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalat
ion:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ddcd6787d-54cfx_calico-system(12f0bf61-64a1-4c2f-bbd3-977cfb8492eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:21.112317 kubelet[3651]: E1216 12:46:21.112250 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:46:21.570089 kubelet[3651]: I1216 12:46:21.570034 3651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3efdc05c-5fa9-4b11-9892-e51e34365644" path="/var/lib/kubelet/pods/3efdc05c-5fa9-4b11-9892-e51e34365644/volumes" Dec 16 12:46:21.718391 kubelet[3651]: E1216 12:46:21.718323 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:46:21.744000 audit[4909]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=4909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:21.744000 audit[4909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff84da6c0 a2=0 a3=1 items=0 ppid=3809 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:21.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:21.749000 audit[4909]: NETFILTER_CFG table=nat:123 family=2 entries=12 op=nft_register_rule pid=4909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:21.749000 audit[4909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff84da6c0 a2=0 a3=1 items=0 ppid=3809 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:21.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 
16 12:46:22.027016 systemd-networkd[1715]: cali90b707505ee: Gained IPv6LL Dec 16 12:46:23.566888 containerd[2018]: time="2025-12-16T12:46:23.566838369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t79gj,Uid:65a10ec6-3cde-4628-b3e7-3d3f8c440052,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:23.777986 systemd-networkd[1715]: cali81a1f4e4274: Link UP Dec 16 12:46:23.781069 systemd-networkd[1715]: cali81a1f4e4274: Gained carrier Dec 16 12:46:23.822862 containerd[2018]: 2025-12-16 12:46:23.603 [INFO][4934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:23.822862 containerd[2018]: 2025-12-16 12:46:23.614 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0 coredns-674b8bbfcf- kube-system 65a10ec6-3cde-4628-b3e7-3d3f8c440052 837 0 2025-12-16 12:45:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 coredns-674b8bbfcf-t79gj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81a1f4e4274 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Namespace="kube-system" Pod="coredns-674b8bbfcf-t79gj" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-" Dec 16 12:46:23.822862 containerd[2018]: 2025-12-16 12:46:23.615 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Namespace="kube-system" Pod="coredns-674b8bbfcf-t79gj" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" Dec 16 12:46:23.822862 containerd[2018]: 2025-12-16 12:46:23.643 [INFO][4945] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" HandleID="k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.643 [INFO][4945] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" HandleID="k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"coredns-674b8bbfcf-t79gj", "timestamp":"2025-12-16 12:46:23.643353055 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.643 [INFO][4945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.643 [INFO][4945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.643 [INFO][4945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.651 [INFO][4945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.728 [INFO][4945] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.734 [INFO][4945] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.737 [INFO][4945] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823225 containerd[2018]: 2025-12-16 12:46:23.741 [INFO][4945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823375 containerd[2018]: 2025-12-16 12:46:23.742 [INFO][4945] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823375 containerd[2018]: 2025-12-16 12:46:23.747 [INFO][4945] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa Dec 16 12:46:23.823375 containerd[2018]: 2025-12-16 12:46:23.754 [INFO][4945] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823375 containerd[2018]: 2025-12-16 12:46:23.769 [INFO][4945] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.127.130/26] block=192.168.127.128/26 handle="k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823375 containerd[2018]: 2025-12-16 12:46:23.770 [INFO][4945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.130/26] handle="k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:23.823375 containerd[2018]: 2025-12-16 12:46:23.770 [INFO][4945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:23.823375 containerd[2018]: 2025-12-16 12:46:23.770 [INFO][4945] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.130/26] IPv6=[] ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" HandleID="k8s-pod-network.0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" Dec 16 12:46:23.823481 containerd[2018]: 2025-12-16 12:46:23.774 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Namespace="kube-system" Pod="coredns-674b8bbfcf-t79gj" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"65a10ec6-3cde-4628-b3e7-3d3f8c440052", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"coredns-674b8bbfcf-t79gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81a1f4e4274", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:23.823481 containerd[2018]: 2025-12-16 12:46:23.774 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.130/32] ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Namespace="kube-system" Pod="coredns-674b8bbfcf-t79gj" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" Dec 16 12:46:23.823481 containerd[2018]: 2025-12-16 12:46:23.774 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81a1f4e4274 ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Namespace="kube-system" Pod="coredns-674b8bbfcf-t79gj" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" Dec 16 12:46:23.823481 containerd[2018]: 2025-12-16 12:46:23.781 [INFO][4934] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Namespace="kube-system" Pod="coredns-674b8bbfcf-t79gj" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" Dec 16 12:46:23.823481 containerd[2018]: 2025-12-16 12:46:23.781 [INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Namespace="kube-system" Pod="coredns-674b8bbfcf-t79gj" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"65a10ec6-3cde-4628-b3e7-3d3f8c440052", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa", Pod:"coredns-674b8bbfcf-t79gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81a1f4e4274", 
MAC:"6e:11:53:22:5d:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:23.823481 containerd[2018]: 2025-12-16 12:46:23.817 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" Namespace="kube-system" Pod="coredns-674b8bbfcf-t79gj" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--t79gj-eth0" Dec 16 12:46:23.864349 containerd[2018]: time="2025-12-16T12:46:23.864110317Z" level=info msg="connecting to shim 0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa" address="unix:///run/containerd/s/cdfff17db10744ff9ebf2924040372f2dbbafd83aed55a0c53252a29a75db16a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:23.902155 systemd[1]: Started cri-containerd-0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa.scope - libcontainer container 0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa. 
Dec 16 12:46:23.915000 audit: BPF prog-id=203 op=LOAD Dec 16 12:46:23.920474 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 16 12:46:23.920609 kernel: audit: type=1334 audit(1765889183.915:599): prog-id=203 op=LOAD Dec 16 12:46:23.925000 audit: BPF prog-id=204 op=LOAD Dec 16 12:46:23.931785 kernel: audit: type=1334 audit(1765889183.925:600): prog-id=204 op=LOAD Dec 16 12:46:23.925000 audit[4997]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:23.980088 kernel: audit: type=1300 audit(1765889183.925:600): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.981574 kernel: audit: type=1327 audit(1765889183.925:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:23.981616 kernel: audit: type=1334 audit(1765889183.925:601): prog-id=204 op=UNLOAD Dec 16 12:46:23.925000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:46:23.925000 audit[4997]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4986 pid=4997 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.009128 kernel: audit: type=1300 audit(1765889183.925:601): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:24.030431 kernel: audit: type=1327 audit(1765889183.925:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:23.931000 audit: BPF prog-id=205 op=LOAD Dec 16 12:46:24.036847 kernel: audit: type=1334 audit(1765889183.931:602): prog-id=205 op=LOAD Dec 16 12:46:23.931000 audit[4997]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.062153 kernel: audit: type=1300 audit(1765889183.931:602): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:46:23.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:24.082946 kernel: audit: type=1327 audit(1765889183.931:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:23.931000 audit: BPF prog-id=206 op=LOAD Dec 16 12:46:23.931000 audit[4997]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:23.931000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:46:23.931000 audit[4997]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.931000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:23.931000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:46:23.931000 audit[4997]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:23.931000 audit: BPF prog-id=207 op=LOAD Dec 16 12:46:23.931000 audit[4997]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062323930396630333735383836363662646235303339633563663935 Dec 16 12:46:24.241972 containerd[2018]: time="2025-12-16T12:46:24.241907475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t79gj,Uid:65a10ec6-3cde-4628-b3e7-3d3f8c440052,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa\"" Dec 16 12:46:24.398910 containerd[2018]: 
time="2025-12-16T12:46:24.398780030Z" level=info msg="CreateContainer within sandbox \"0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:46:24.508707 containerd[2018]: time="2025-12-16T12:46:24.508315997Z" level=info msg="Container 7447b4660467c0d804c149e0616a2e9442a33a63ec26a4e48b0fc3cb33b23f26: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:24.557575 containerd[2018]: time="2025-12-16T12:46:24.557449030Z" level=info msg="CreateContainer within sandbox \"0b2909f037588666bdb5039c5cf95b7d42d18c9394521b834c5f40d543dd7faa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7447b4660467c0d804c149e0616a2e9442a33a63ec26a4e48b0fc3cb33b23f26\"" Dec 16 12:46:24.560037 containerd[2018]: time="2025-12-16T12:46:24.559720087Z" level=info msg="StartContainer for \"7447b4660467c0d804c149e0616a2e9442a33a63ec26a4e48b0fc3cb33b23f26\"" Dec 16 12:46:24.561964 containerd[2018]: time="2025-12-16T12:46:24.561930503Z" level=info msg="connecting to shim 7447b4660467c0d804c149e0616a2e9442a33a63ec26a4e48b0fc3cb33b23f26" address="unix:///run/containerd/s/cdfff17db10744ff9ebf2924040372f2dbbafd83aed55a0c53252a29a75db16a" protocol=ttrpc version=3 Dec 16 12:46:24.581098 systemd[1]: Started cri-containerd-7447b4660467c0d804c149e0616a2e9442a33a63ec26a4e48b0fc3cb33b23f26.scope - libcontainer container 7447b4660467c0d804c149e0616a2e9442a33a63ec26a4e48b0fc3cb33b23f26. 
Dec 16 12:46:24.656000 audit: BPF prog-id=208 op=LOAD Dec 16 12:46:24.656000 audit: BPF prog-id=209 op=LOAD Dec 16 12:46:24.656000 audit[5028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4986 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343762343636303436376330643830346331343965303631366132 Dec 16 12:46:24.656000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:46:24.656000 audit[5028]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4986 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343762343636303436376330643830346331343965303631366132 Dec 16 12:46:24.656000 audit: BPF prog-id=210 op=LOAD Dec 16 12:46:24.656000 audit[5028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4986 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.656000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343762343636303436376330643830346331343965303631366132 Dec 16 12:46:24.656000 audit: BPF prog-id=211 op=LOAD Dec 16 12:46:24.656000 audit[5028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4986 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343762343636303436376330643830346331343965303631366132 Dec 16 12:46:24.657000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:46:24.657000 audit[5028]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4986 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343762343636303436376330643830346331343965303631366132 Dec 16 12:46:24.657000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:46:24.657000 audit[5028]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4986 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:46:24.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343762343636303436376330643830346331343965303631366132 Dec 16 12:46:24.657000 audit: BPF prog-id=212 op=LOAD Dec 16 12:46:24.657000 audit[5028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4986 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:24.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343762343636303436376330643830346331343965303631366132 Dec 16 12:46:24.821173 containerd[2018]: time="2025-12-16T12:46:24.821117296Z" level=info msg="StartContainer for \"7447b4660467c0d804c149e0616a2e9442a33a63ec26a4e48b0fc3cb33b23f26\" returns successfully" Dec 16 12:46:25.419002 systemd-networkd[1715]: cali81a1f4e4274: Gained IPv6LL Dec 16 12:46:25.567531 containerd[2018]: time="2025-12-16T12:46:25.567473274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ddd8d65c6-nlxv9,Uid:64ea5eda-7471-4a46-a060-d20c3a27b031,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:25.673798 systemd-networkd[1715]: cali3fc4079d513: Link UP Dec 16 12:46:25.674899 systemd-networkd[1715]: cali3fc4079d513: Gained carrier Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.599 [INFO][5080] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.608 [INFO][5080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0 calico-kube-controllers-6ddd8d65c6- calico-system 64ea5eda-7471-4a46-a060-d20c3a27b031 832 0 2025-12-16 12:46:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6ddd8d65c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 calico-kube-controllers-6ddd8d65c6-nlxv9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3fc4079d513 [] [] }} ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Namespace="calico-system" Pod="calico-kube-controllers-6ddd8d65c6-nlxv9" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.609 [INFO][5080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Namespace="calico-system" Pod="calico-kube-controllers-6ddd8d65c6-nlxv9" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.629 [INFO][5092] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" HandleID="k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.629 [INFO][5092] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" 
HandleID="k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c8fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"calico-kube-controllers-6ddd8d65c6-nlxv9", "timestamp":"2025-12-16 12:46:25.629380804 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.629 [INFO][5092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.629 [INFO][5092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.629 [INFO][5092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.635 [INFO][5092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.639 [INFO][5092] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.644 [INFO][5092] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.647 [INFO][5092] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.651 [INFO][5092] ipam/ipam.go 
235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.651 [INFO][5092] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.653 [INFO][5092] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0 Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.660 [INFO][5092] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.666 [INFO][5092] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.131/26] block=192.168.127.128/26 handle="k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.666 [INFO][5092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.131/26] handle="k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.666 [INFO][5092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:25.694098 containerd[2018]: 2025-12-16 12:46:25.666 [INFO][5092] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.131/26] IPv6=[] ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" HandleID="k8s-pod-network.7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" Dec 16 12:46:25.694610 containerd[2018]: 2025-12-16 12:46:25.668 [INFO][5080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Namespace="calico-system" Pod="calico-kube-controllers-6ddd8d65c6-nlxv9" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0", GenerateName:"calico-kube-controllers-6ddd8d65c6-", Namespace:"calico-system", SelfLink:"", UID:"64ea5eda-7471-4a46-a060-d20c3a27b031", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ddd8d65c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"calico-kube-controllers-6ddd8d65c6-nlxv9", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fc4079d513", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:25.694610 containerd[2018]: 2025-12-16 12:46:25.668 [INFO][5080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.131/32] ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Namespace="calico-system" Pod="calico-kube-controllers-6ddd8d65c6-nlxv9" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" Dec 16 12:46:25.694610 containerd[2018]: 2025-12-16 12:46:25.668 [INFO][5080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fc4079d513 ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Namespace="calico-system" Pod="calico-kube-controllers-6ddd8d65c6-nlxv9" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" Dec 16 12:46:25.694610 containerd[2018]: 2025-12-16 12:46:25.675 [INFO][5080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Namespace="calico-system" Pod="calico-kube-controllers-6ddd8d65c6-nlxv9" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" Dec 16 12:46:25.694610 containerd[2018]: 2025-12-16 12:46:25.677 [INFO][5080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Namespace="calico-system" Pod="calico-kube-controllers-6ddd8d65c6-nlxv9" 
WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0", GenerateName:"calico-kube-controllers-6ddd8d65c6-", Namespace:"calico-system", SelfLink:"", UID:"64ea5eda-7471-4a46-a060-d20c3a27b031", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ddd8d65c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0", Pod:"calico-kube-controllers-6ddd8d65c6-nlxv9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fc4079d513", MAC:"a2:e5:6b:03:1d:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:25.694610 containerd[2018]: 2025-12-16 12:46:25.691 [INFO][5080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" Namespace="calico-system" 
Pod="calico-kube-controllers-6ddd8d65c6-nlxv9" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--kube--controllers--6ddd8d65c6--nlxv9-eth0" Dec 16 12:46:25.730484 containerd[2018]: time="2025-12-16T12:46:25.730384972Z" level=info msg="connecting to shim 7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0" address="unix:///run/containerd/s/673cb3f167c22cc92914fcbb1812db63869009a37df04baa0a1519e9b1f43ea5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:25.767142 systemd[1]: Started cri-containerd-7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0.scope - libcontainer container 7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0. Dec 16 12:46:25.776000 audit: BPF prog-id=213 op=LOAD Dec 16 12:46:25.777000 audit: BPF prog-id=214 op=LOAD Dec 16 12:46:25.777000 audit[5122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761613365336364313033646133333937373036656266636562613835 Dec 16 12:46:25.777000 audit: BPF prog-id=214 op=UNLOAD Dec 16 12:46:25.777000 audit[5122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.777000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761613365336364313033646133333937373036656266636562613835 Dec 16 12:46:25.778000 audit: BPF prog-id=215 op=LOAD Dec 16 12:46:25.778000 audit[5122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761613365336364313033646133333937373036656266636562613835 Dec 16 12:46:25.778000 audit: BPF prog-id=216 op=LOAD Dec 16 12:46:25.778000 audit[5122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761613365336364313033646133333937373036656266636562613835 Dec 16 12:46:25.778000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:46:25.778000 audit[5122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:46:25.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761613365336364313033646133333937373036656266636562613835 Dec 16 12:46:25.778000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:46:25.778000 audit[5122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761613365336364313033646133333937373036656266636562613835 Dec 16 12:46:25.778000 audit: BPF prog-id=217 op=LOAD Dec 16 12:46:25.778000 audit[5122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761613365336364313033646133333937373036656266636562613835 Dec 16 12:46:25.802244 containerd[2018]: time="2025-12-16T12:46:25.802199115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ddd8d65c6-nlxv9,Uid:64ea5eda-7471-4a46-a060-d20c3a27b031,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"7aa3e3cd103da3397706ebfceba8531134a9329c7d3aa6883edec75d3905eda0\"" Dec 16 12:46:25.804034 containerd[2018]: time="2025-12-16T12:46:25.804001719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:46:25.843300 kubelet[3651]: I1216 12:46:25.843221 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t79gj" podStartSLOduration=39.843109519 podStartE2EDuration="39.843109519s" podCreationTimestamp="2025-12-16 12:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:25.842519302 +0000 UTC m=+44.376222884" watchObservedRunningTime="2025-12-16 12:46:25.843109519 +0000 UTC m=+44.376813085" Dec 16 12:46:25.865000 audit[5157]: NETFILTER_CFG table=filter:124 family=2 entries=22 op=nft_register_rule pid=5157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:25.865000 audit[5157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffce2502e0 a2=0 a3=1 items=0 ppid=3809 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:25.871000 audit[5157]: NETFILTER_CFG table=nat:125 family=2 entries=12 op=nft_register_rule pid=5157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:25.871000 audit[5157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce2502e0 a2=0 a3=1 items=0 ppid=3809 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:46:25.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:25.908000 audit[5160]: NETFILTER_CFG table=filter:126 family=2 entries=19 op=nft_register_rule pid=5160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:25.908000 audit[5160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc17fb140 a2=0 a3=1 items=0 ppid=3809 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:25.912000 audit[5160]: NETFILTER_CFG table=nat:127 family=2 entries=33 op=nft_register_chain pid=5160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:25.912000 audit[5160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffc17fb140 a2=0 a3=1 items=0 ppid=3809 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:25.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:26.026514 containerd[2018]: time="2025-12-16T12:46:26.026421325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:26.030080 containerd[2018]: time="2025-12-16T12:46:26.029918425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:46:26.030080 containerd[2018]: time="2025-12-16T12:46:26.029957891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:26.030284 kubelet[3651]: E1216 12:46:26.030226 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:26.030329 kubelet[3651]: E1216 12:46:26.030285 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:26.031649 kubelet[3651]: E1216 12:46:26.030770 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2g2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6ddd8d65c6-nlxv9_calico-system(64ea5eda-7471-4a46-a060-d20c3a27b031): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:26.033096 kubelet[3651]: E1216 12:46:26.033028 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:46:26.566816 containerd[2018]: time="2025-12-16T12:46:26.566768498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb597c7bd-fs82z,Uid:f12e3f4c-f803-496f-aa7c-d8e02fdb59ff,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:26.567041 containerd[2018]: 
time="2025-12-16T12:46:26.567019577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b4c77bdf-sh2bd,Uid:31923b4e-d7e7-4360-b824-f299f181acf0,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:26.567095 containerd[2018]: time="2025-12-16T12:46:26.567077275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvxxz,Uid:2abb66c6-2c22-4302-a249-0480581958a4,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:26.752947 systemd-networkd[1715]: calif0927ac5353: Link UP Dec 16 12:46:26.754567 systemd-networkd[1715]: calif0927ac5353: Gained carrier Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.621 [INFO][5181] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.644 [INFO][5181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0 calico-apiserver-77b4c77bdf- calico-apiserver 31923b4e-d7e7-4360-b824-f299f181acf0 836 0 2025-12-16 12:45:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77b4c77bdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 calico-apiserver-77b4c77bdf-sh2bd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0927ac5353 [] [] }} ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-sh2bd" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.644 [INFO][5181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-sh2bd" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.687 [INFO][5217] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" HandleID="k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.687 [INFO][5217] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" HandleID="k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"calico-apiserver-77b4c77bdf-sh2bd", "timestamp":"2025-12-16 12:46:26.687559421 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.687 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.687 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.687 [INFO][5217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.697 [INFO][5217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.705 [INFO][5217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.715 [INFO][5217] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.718 [INFO][5217] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.721 [INFO][5217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.721 [INFO][5217] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.722 [INFO][5217] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.728 [INFO][5217] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.740 [INFO][5217] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.127.132/26] block=192.168.127.128/26 handle="k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.740 [INFO][5217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.132/26] handle="k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.741 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:26.776819 containerd[2018]: 2025-12-16 12:46:26.741 [INFO][5217] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.132/26] IPv6=[] ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" HandleID="k8s-pod-network.489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" Dec 16 12:46:26.777591 containerd[2018]: 2025-12-16 12:46:26.745 [INFO][5181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-sh2bd" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0", GenerateName:"calico-apiserver-77b4c77bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"31923b4e-d7e7-4360-b824-f299f181acf0", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b4c77bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"calico-apiserver-77b4c77bdf-sh2bd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0927ac5353", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:26.777591 containerd[2018]: 2025-12-16 12:46:26.745 [INFO][5181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.132/32] ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-sh2bd" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" Dec 16 12:46:26.777591 containerd[2018]: 2025-12-16 12:46:26.745 [INFO][5181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0927ac5353 ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-sh2bd" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" Dec 16 12:46:26.777591 containerd[2018]: 2025-12-16 12:46:26.756 [INFO][5181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Namespace="calico-apiserver" 
Pod="calico-apiserver-77b4c77bdf-sh2bd" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" Dec 16 12:46:26.777591 containerd[2018]: 2025-12-16 12:46:26.757 [INFO][5181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-sh2bd" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0", GenerateName:"calico-apiserver-77b4c77bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"31923b4e-d7e7-4360-b824-f299f181acf0", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b4c77bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f", Pod:"calico-apiserver-77b4c77bdf-sh2bd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calif0927ac5353", MAC:"6a:74:43:7b:d1:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:26.777591 containerd[2018]: 2025-12-16 12:46:26.774 [INFO][5181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-sh2bd" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--sh2bd-eth0" Dec 16 12:46:26.835335 kubelet[3651]: E1216 12:46:26.834727 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:46:26.836292 containerd[2018]: time="2025-12-16T12:46:26.836248869Z" level=info msg="connecting to shim 489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f" address="unix:///run/containerd/s/e7af820504277f448d6eaac0709785eb8c3609193dd52e3dc86bf63bf0594082" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:26.879390 systemd[1]: Started cri-containerd-489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f.scope - libcontainer container 489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f. 
Dec 16 12:46:26.895314 systemd-networkd[1715]: cali2136dcfcc9d: Link UP Dec 16 12:46:26.902828 systemd-networkd[1715]: cali2136dcfcc9d: Gained carrier Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.620 [INFO][5177] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.638 [INFO][5177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0 calico-apiserver-6fb597c7bd- calico-apiserver f12e3f4c-f803-496f-aa7c-d8e02fdb59ff 839 0 2025-12-16 12:45:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb597c7bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 calico-apiserver-6fb597c7bd-fs82z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2136dcfcc9d [] [] }} ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Namespace="calico-apiserver" Pod="calico-apiserver-6fb597c7bd-fs82z" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.638 [INFO][5177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Namespace="calico-apiserver" Pod="calico-apiserver-6fb597c7bd-fs82z" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.694 [INFO][5215] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" 
HandleID="k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.694 [INFO][5215] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" HandleID="k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"calico-apiserver-6fb597c7bd-fs82z", "timestamp":"2025-12-16 12:46:26.694796046 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.695 [INFO][5215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.741 [INFO][5215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.741 [INFO][5215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.797 [INFO][5215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.804 [INFO][5215] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.818 [INFO][5215] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.820 [INFO][5215] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.826 [INFO][5215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.827 [INFO][5215] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.833 [INFO][5215] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05 Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.848 [INFO][5215] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.865 [INFO][5215] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.127.133/26] block=192.168.127.128/26 handle="k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.865 [INFO][5215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.133/26] handle="k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.865 [INFO][5215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:26.929012 containerd[2018]: 2025-12-16 12:46:26.865 [INFO][5215] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.133/26] IPv6=[] ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" HandleID="k8s-pod-network.a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" Dec 16 12:46:26.930503 containerd[2018]: 2025-12-16 12:46:26.873 [INFO][5177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Namespace="calico-apiserver" Pod="calico-apiserver-6fb597c7bd-fs82z" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0", GenerateName:"calico-apiserver-6fb597c7bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f12e3f4c-f803-496f-aa7c-d8e02fdb59ff", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb597c7bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"calico-apiserver-6fb597c7bd-fs82z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2136dcfcc9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:26.930503 containerd[2018]: 2025-12-16 12:46:26.875 [INFO][5177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.133/32] ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Namespace="calico-apiserver" Pod="calico-apiserver-6fb597c7bd-fs82z" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" Dec 16 12:46:26.930503 containerd[2018]: 2025-12-16 12:46:26.875 [INFO][5177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2136dcfcc9d ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Namespace="calico-apiserver" Pod="calico-apiserver-6fb597c7bd-fs82z" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" Dec 16 12:46:26.930503 containerd[2018]: 2025-12-16 12:46:26.905 [INFO][5177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Namespace="calico-apiserver" 
Pod="calico-apiserver-6fb597c7bd-fs82z" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" Dec 16 12:46:26.930503 containerd[2018]: 2025-12-16 12:46:26.905 [INFO][5177] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Namespace="calico-apiserver" Pod="calico-apiserver-6fb597c7bd-fs82z" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0", GenerateName:"calico-apiserver-6fb597c7bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f12e3f4c-f803-496f-aa7c-d8e02fdb59ff", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb597c7bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05", Pod:"calico-apiserver-6fb597c7bd-fs82z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali2136dcfcc9d", MAC:"2e:a8:3b:d3:9c:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:26.930503 containerd[2018]: 2025-12-16 12:46:26.925 [INFO][5177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" Namespace="calico-apiserver" Pod="calico-apiserver-6fb597c7bd-fs82z" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--6fb597c7bd--fs82z-eth0" Dec 16 12:46:26.936000 audit: BPF prog-id=218 op=LOAD Dec 16 12:46:26.938000 audit: BPF prog-id=219 op=LOAD Dec 16 12:46:26.938000 audit[5262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5252 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:26.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438393737326161656266346135656138646536636133643033343638 Dec 16 12:46:26.939000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:46:26.939000 audit[5262]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5252 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:26.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438393737326161656266346135656138646536636133643033343638 Dec 16 12:46:26.941000 audit: BPF 
prog-id=220 op=LOAD Dec 16 12:46:26.941000 audit[5262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5252 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:26.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438393737326161656266346135656138646536636133643033343638 Dec 16 12:46:26.942000 audit: BPF prog-id=221 op=LOAD Dec 16 12:46:26.942000 audit[5262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5252 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:26.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438393737326161656266346135656138646536636133643033343638 Dec 16 12:46:26.942000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:46:26.942000 audit[5262]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5252 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:26.942000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438393737326161656266346135656138646536636133643033343638 Dec 16 12:46:26.942000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:46:26.942000 audit[5262]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5252 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:26.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438393737326161656266346135656138646536636133643033343638 Dec 16 12:46:26.944000 audit: BPF prog-id=222 op=LOAD Dec 16 12:46:26.944000 audit[5262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5252 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:26.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438393737326161656266346135656138646536636133643033343638 Dec 16 12:46:27.000464 containerd[2018]: time="2025-12-16T12:46:27.000058920Z" level=info msg="connecting to shim a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05" address="unix:///run/containerd/s/e265f8590aa3d4da91dc4aa10421d6c3d32a30b3810ad695c4dd7fe8e2bea8c6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:27.003741 systemd-networkd[1715]: 
cali48d4e9a5b59: Link UP Dec 16 12:46:27.011304 systemd-networkd[1715]: cali48d4e9a5b59: Gained carrier Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.655 [INFO][5202] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.672 [INFO][5202] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0 coredns-674b8bbfcf- kube-system 2abb66c6-2c22-4302-a249-0480581958a4 835 0 2025-12-16 12:45:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 coredns-674b8bbfcf-tvxxz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48d4e9a5b59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvxxz" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.672 [INFO][5202] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvxxz" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.717 [INFO][5229] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" HandleID="k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 
12:46:26.717 [INFO][5229] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" HandleID="k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"coredns-674b8bbfcf-tvxxz", "timestamp":"2025-12-16 12:46:26.717240061 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.717 [INFO][5229] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.865 [INFO][5229] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.867 [INFO][5229] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.905 [INFO][5229] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.918 [INFO][5229] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.928 [INFO][5229] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.940 [INFO][5229] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.945 [INFO][5229] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.946 [INFO][5229] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.948 [INFO][5229] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.957 [INFO][5229] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.976 [INFO][5229] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.127.134/26] block=192.168.127.128/26 handle="k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.976 [INFO][5229] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.134/26] handle="k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.976 [INFO][5229] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:27.043583 containerd[2018]: 2025-12-16 12:46:26.976 [INFO][5229] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.134/26] IPv6=[] ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" HandleID="k8s-pod-network.8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" Dec 16 12:46:27.044300 containerd[2018]: 2025-12-16 12:46:26.983 [INFO][5202] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvxxz" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2abb66c6-2c22-4302-a249-0480581958a4", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"coredns-674b8bbfcf-tvxxz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48d4e9a5b59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:27.044300 containerd[2018]: 2025-12-16 12:46:26.985 [INFO][5202] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.134/32] ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvxxz" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" Dec 16 12:46:27.044300 containerd[2018]: 2025-12-16 12:46:26.986 [INFO][5202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48d4e9a5b59 ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvxxz" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" Dec 16 12:46:27.044300 containerd[2018]: 2025-12-16 12:46:27.013 [INFO][5202] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvxxz" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" Dec 16 12:46:27.044300 containerd[2018]: 2025-12-16 12:46:27.018 [INFO][5202] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvxxz" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2abb66c6-2c22-4302-a249-0480581958a4", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f", Pod:"coredns-674b8bbfcf-tvxxz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48d4e9a5b59", 
MAC:"52:16:ae:c4:f5:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:27.044300 containerd[2018]: 2025-12-16 12:46:27.036 [INFO][5202] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvxxz" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-coredns--674b8bbfcf--tvxxz-eth0" Dec 16 12:46:27.049305 containerd[2018]: time="2025-12-16T12:46:27.049239275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b4c77bdf-sh2bd,Uid:31923b4e-d7e7-4360-b824-f299f181acf0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"489772aaebf4a5ea8de6ca3d03468c6810150d11d423305b5018741619bc005f\"" Dec 16 12:46:27.054909 containerd[2018]: time="2025-12-16T12:46:27.054431829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:27.095185 systemd[1]: Started cri-containerd-a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05.scope - libcontainer container a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05. 
Dec 16 12:46:27.112113 containerd[2018]: time="2025-12-16T12:46:27.111974729Z" level=info msg="connecting to shim 8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f" address="unix:///run/containerd/s/3869409b47ad9d445a4ef04dad774069e72e4d0eb830767ee585c57e494e057a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:27.120000 audit: BPF prog-id=223 op=LOAD Dec 16 12:46:27.121000 audit: BPF prog-id=224 op=LOAD Dec 16 12:46:27.121000 audit[5329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5300 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134333764626563353164613066666465366439356633653033623135 Dec 16 12:46:27.121000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:46:27.121000 audit[5329]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5300 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134333764626563353164613066666465366439356633653033623135 Dec 16 12:46:27.122000 audit: BPF prog-id=225 op=LOAD Dec 16 12:46:27.122000 audit[5329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5300 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134333764626563353164613066666465366439356633653033623135 Dec 16 12:46:27.122000 audit: BPF prog-id=226 op=LOAD Dec 16 12:46:27.122000 audit[5329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5300 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134333764626563353164613066666465366439356633653033623135 Dec 16 12:46:27.122000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:46:27.122000 audit[5329]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5300 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134333764626563353164613066666465366439356633653033623135 Dec 16 12:46:27.122000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:46:27.122000 audit[5329]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5300 pid=5329 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134333764626563353164613066666465366439356633653033623135 Dec 16 12:46:27.122000 audit: BPF prog-id=227 op=LOAD Dec 16 12:46:27.122000 audit[5329]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5300 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134333764626563353164613066666465366439356633653033623135 Dec 16 12:46:27.160270 systemd[1]: Started cri-containerd-8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f.scope - libcontainer container 8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f. 
Dec 16 12:46:27.171686 containerd[2018]: time="2025-12-16T12:46:27.171639181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb597c7bd-fs82z,Uid:f12e3f4c-f803-496f-aa7c-d8e02fdb59ff,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a437dbec51da0ffde6d95f3e03b15c69f45df8afaf7d18137c4931311c864d05\"" Dec 16 12:46:27.176000 audit: BPF prog-id=228 op=LOAD Dec 16 12:46:27.177000 audit: BPF prog-id=229 op=LOAD Dec 16 12:46:27.177000 audit[5369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5358 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383834363635396534376239623036353638656462366230663634 Dec 16 12:46:27.178000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:46:27.178000 audit[5369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5358 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383834363635396534376239623036353638656462366230663634 Dec 16 12:46:27.178000 audit: BPF prog-id=230 op=LOAD Dec 16 12:46:27.178000 audit[5369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5358 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383834363635396534376239623036353638656462366230663634 Dec 16 12:46:27.179000 audit: BPF prog-id=231 op=LOAD Dec 16 12:46:27.179000 audit[5369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5358 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383834363635396534376239623036353638656462366230663634 Dec 16 12:46:27.179000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:46:27.179000 audit[5369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5358 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383834363635396534376239623036353638656462366230663634 Dec 16 12:46:27.179000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:46:27.179000 audit[5369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5358 pid=5369 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383834363635396534376239623036353638656462366230663634 Dec 16 12:46:27.179000 audit: BPF prog-id=232 op=LOAD Dec 16 12:46:27.179000 audit[5369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5358 pid=5369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383834363635396534376239623036353638656462366230663634 Dec 16 12:46:27.207099 containerd[2018]: time="2025-12-16T12:46:27.207036983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvxxz,Uid:2abb66c6-2c22-4302-a249-0480581958a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f\"" Dec 16 12:46:27.216618 containerd[2018]: time="2025-12-16T12:46:27.216554411Z" level=info msg="CreateContainer within sandbox \"8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:46:27.233901 containerd[2018]: time="2025-12-16T12:46:27.233408465Z" level=info msg="Container a7eacad2c4889ed2b3d69646139d13917c9e4636a79033530502fedfb9b801af: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:27.244489 
containerd[2018]: time="2025-12-16T12:46:27.244433869Z" level=info msg="CreateContainer within sandbox \"8e8846659e47b9b06568edb6b0f646d2ff386c1d259b40db9b36ac93917c500f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a7eacad2c4889ed2b3d69646139d13917c9e4636a79033530502fedfb9b801af\"" Dec 16 12:46:27.245731 containerd[2018]: time="2025-12-16T12:46:27.245685607Z" level=info msg="StartContainer for \"a7eacad2c4889ed2b3d69646139d13917c9e4636a79033530502fedfb9b801af\"" Dec 16 12:46:27.246891 containerd[2018]: time="2025-12-16T12:46:27.246849229Z" level=info msg="connecting to shim a7eacad2c4889ed2b3d69646139d13917c9e4636a79033530502fedfb9b801af" address="unix:///run/containerd/s/3869409b47ad9d445a4ef04dad774069e72e4d0eb830767ee585c57e494e057a" protocol=ttrpc version=3 Dec 16 12:46:27.275285 systemd[1]: Started cri-containerd-a7eacad2c4889ed2b3d69646139d13917c9e4636a79033530502fedfb9b801af.scope - libcontainer container a7eacad2c4889ed2b3d69646139d13917c9e4636a79033530502fedfb9b801af. 
Dec 16 12:46:27.284000 audit: BPF prog-id=233 op=LOAD Dec 16 12:46:27.285000 audit: BPF prog-id=234 op=LOAD Dec 16 12:46:27.285000 audit[5407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5358 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656163616432633438383965643262336436393634363133396431 Dec 16 12:46:27.285000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:46:27.285000 audit[5407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5358 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656163616432633438383965643262336436393634363133396431 Dec 16 12:46:27.285000 audit: BPF prog-id=235 op=LOAD Dec 16 12:46:27.285000 audit[5407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5358 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.285000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656163616432633438383965643262336436393634363133396431 Dec 16 12:46:27.285000 audit: BPF prog-id=236 op=LOAD Dec 16 12:46:27.285000 audit[5407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5358 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656163616432633438383965643262336436393634363133396431 Dec 16 12:46:27.285000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:46:27.285000 audit[5407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5358 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656163616432633438383965643262336436393634363133396431 Dec 16 12:46:27.285000 audit: BPF prog-id=235 op=UNLOAD Dec 16 12:46:27.285000 audit[5407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5358 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:46:27.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656163616432633438383965643262336436393634363133396431 Dec 16 12:46:27.285000 audit: BPF prog-id=237 op=LOAD Dec 16 12:46:27.285000 audit[5407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5358 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137656163616432633438383965643262336436393634363133396431 Dec 16 12:46:27.308665 containerd[2018]: time="2025-12-16T12:46:27.308549656Z" level=info msg="StartContainer for \"a7eacad2c4889ed2b3d69646139d13917c9e4636a79033530502fedfb9b801af\" returns successfully" Dec 16 12:46:27.342209 containerd[2018]: time="2025-12-16T12:46:27.342153634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:27.344915 containerd[2018]: time="2025-12-16T12:46:27.344850537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:27.345059 containerd[2018]: time="2025-12-16T12:46:27.344890242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:27.345519 kubelet[3651]: E1216 12:46:27.345277 3651 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:27.345519 kubelet[3651]: E1216 12:46:27.345328 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:27.346305 kubelet[3651]: E1216 12:46:27.345939 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s9vv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77b4c77bdf-sh2bd_calico-apiserver(31923b4e-d7e7-4360-b824-f299f181acf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:27.346430 containerd[2018]: time="2025-12-16T12:46:27.346055177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:27.347803 kubelet[3651]: E1216 12:46:27.347662 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:46:27.467018 systemd-networkd[1715]: cali3fc4079d513: Gained IPv6LL Dec 16 12:46:27.566904 containerd[2018]: time="2025-12-16T12:46:27.566713573Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b4c77bdf-4jfxv,Uid:e19615e9-eae1-4066-8da3-a07943f9e95e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:27.566904 containerd[2018]: time="2025-12-16T12:46:27.566826600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mj8nq,Uid:180cd658-ddf7-4444-81e2-acfbf19611d5,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:27.567305 containerd[2018]: time="2025-12-16T12:46:27.566729918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-llq4t,Uid:828880ea-211a-4230-af15-b5fa7bcbc734,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:27.624386 containerd[2018]: time="2025-12-16T12:46:27.624050564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:27.629877 containerd[2018]: time="2025-12-16T12:46:27.629716778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:27.629877 containerd[2018]: time="2025-12-16T12:46:27.629816781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:27.630088 kubelet[3651]: E1216 12:46:27.630040 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:27.630166 kubelet[3651]: E1216 12:46:27.630099 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:27.631099 kubelet[3651]: E1216 12:46:27.630344 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjccj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb597c7bd-fs82z_calico-apiserver(f12e3f4c-f803-496f-aa7c-d8e02fdb59ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:27.631948 kubelet[3651]: E1216 12:46:27.631548 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:46:27.742708 systemd-networkd[1715]: cali5954f0f62a4: Link UP Dec 16 12:46:27.743342 systemd-networkd[1715]: cali5954f0f62a4: Gained carrier Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.633 [INFO][5448] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 
12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.649 [INFO][5448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0 csi-node-driver- calico-system 180cd658-ddf7-4444-81e2-acfbf19611d5 723 0 2025-12-16 12:46:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 csi-node-driver-mj8nq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5954f0f62a4 [] [] }} ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Namespace="calico-system" Pod="csi-node-driver-mj8nq" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.649 [INFO][5448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Namespace="calico-system" Pod="csi-node-driver-mj8nq" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.689 [INFO][5478] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" HandleID="k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.689 [INFO][5478] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" 
HandleID="k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"csi-node-driver-mj8nq", "timestamp":"2025-12-16 12:46:27.689569243 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.689 [INFO][5478] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.690 [INFO][5478] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.690 [INFO][5478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.700 [INFO][5478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.708 [INFO][5478] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.712 [INFO][5478] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.714 [INFO][5478] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.717 [INFO][5478] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.717 [INFO][5478] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.720 [INFO][5478] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.726 [INFO][5478] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.736 [INFO][5478] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.135/26] block=192.168.127.128/26 handle="k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.736 [INFO][5478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.135/26] handle="k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.736 [INFO][5478] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:27.762253 containerd[2018]: 2025-12-16 12:46:27.736 [INFO][5478] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.135/26] IPv6=[] ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" HandleID="k8s-pod-network.f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" Dec 16 12:46:27.762769 containerd[2018]: 2025-12-16 12:46:27.740 [INFO][5448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Namespace="calico-system" Pod="csi-node-driver-mj8nq" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"180cd658-ddf7-4444-81e2-acfbf19611d5", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"csi-node-driver-mj8nq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5954f0f62a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:27.762769 containerd[2018]: 2025-12-16 12:46:27.740 [INFO][5448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.135/32] ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Namespace="calico-system" Pod="csi-node-driver-mj8nq" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" Dec 16 12:46:27.762769 containerd[2018]: 2025-12-16 12:46:27.740 [INFO][5448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5954f0f62a4 ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Namespace="calico-system" Pod="csi-node-driver-mj8nq" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" Dec 16 12:46:27.762769 containerd[2018]: 2025-12-16 12:46:27.743 [INFO][5448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Namespace="calico-system" Pod="csi-node-driver-mj8nq" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" Dec 16 12:46:27.762769 containerd[2018]: 2025-12-16 12:46:27.744 [INFO][5448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Namespace="calico-system" Pod="csi-node-driver-mj8nq" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"180cd658-ddf7-4444-81e2-acfbf19611d5", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc", Pod:"csi-node-driver-mj8nq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5954f0f62a4", MAC:"fe:cf:0e:86:d0:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:27.762769 containerd[2018]: 2025-12-16 12:46:27.758 [INFO][5448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" Namespace="calico-system" Pod="csi-node-driver-mj8nq" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-csi--node--driver--mj8nq-eth0" Dec 16 12:46:27.844655 kubelet[3651]: E1216 12:46:27.844611 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:46:27.847651 kubelet[3651]: E1216 12:46:27.847600 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:46:27.864991 kubelet[3651]: I1216 12:46:27.863486 3651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tvxxz" podStartSLOduration=41.863464169 podStartE2EDuration="41.863464169s" podCreationTimestamp="2025-12-16 12:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:27.860533388 +0000 UTC m=+46.394236962" watchObservedRunningTime="2025-12-16 12:46:27.863464169 +0000 UTC m=+46.397167743" Dec 16 12:46:27.870570 systemd-networkd[1715]: cali94b8963a4af: Link UP Dec 16 12:46:27.873438 systemd-networkd[1715]: cali94b8963a4af: Gained carrier Dec 16 12:46:27.901784 containerd[2018]: time="2025-12-16T12:46:27.900204294Z" level=info msg="connecting to shim f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc" address="unix:///run/containerd/s/a0e4336ca988d4f5b4ec5946e0435e05a9090c35e12bf8047ac33289fb077cec" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.621 [INFO][5440] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.646 [INFO][5440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0 calico-apiserver-77b4c77bdf- calico-apiserver e19615e9-eae1-4066-8da3-a07943f9e95e 833 0 2025-12-16 12:45:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77b4c77bdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 calico-apiserver-77b4c77bdf-4jfxv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali94b8963a4af [] [] }} ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-4jfxv" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.647 [INFO][5440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-4jfxv" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.704 [INFO][5479] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" HandleID="k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.705 [INFO][5479] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" HandleID="k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"calico-apiserver-77b4c77bdf-4jfxv", "timestamp":"2025-12-16 12:46:27.7046785 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.705 [INFO][5479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.736 [INFO][5479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.737 [INFO][5479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.800 [INFO][5479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.810 [INFO][5479] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.823 [INFO][5479] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.826 [INFO][5479] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.828 [INFO][5479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.828 [INFO][5479] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.830 [INFO][5479] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549 Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.838 [INFO][5479] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.850 [INFO][5479] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.127.136/26] block=192.168.127.128/26 handle="k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.850 [INFO][5479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.136/26] handle="k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.850 [INFO][5479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:27.906196 containerd[2018]: 2025-12-16 12:46:27.851 [INFO][5479] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.136/26] IPv6=[] ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" HandleID="k8s-pod-network.0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" Dec 16 12:46:27.906660 containerd[2018]: 2025-12-16 12:46:27.854 [INFO][5440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-4jfxv" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0", GenerateName:"calico-apiserver-77b4c77bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e19615e9-eae1-4066-8da3-a07943f9e95e", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b4c77bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"calico-apiserver-77b4c77bdf-4jfxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94b8963a4af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:27.906660 containerd[2018]: 2025-12-16 12:46:27.854 [INFO][5440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.136/32] ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-4jfxv" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" Dec 16 12:46:27.906660 containerd[2018]: 2025-12-16 12:46:27.854 [INFO][5440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94b8963a4af ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-4jfxv" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" Dec 16 12:46:27.906660 containerd[2018]: 2025-12-16 12:46:27.875 [INFO][5440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Namespace="calico-apiserver" 
Pod="calico-apiserver-77b4c77bdf-4jfxv" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" Dec 16 12:46:27.906660 containerd[2018]: 2025-12-16 12:46:27.875 [INFO][5440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-4jfxv" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0", GenerateName:"calico-apiserver-77b4c77bdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e19615e9-eae1-4066-8da3-a07943f9e95e", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b4c77bdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549", Pod:"calico-apiserver-77b4c77bdf-4jfxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali94b8963a4af", MAC:"0a:fd:60:2a:60:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:27.906660 containerd[2018]: 2025-12-16 12:46:27.899 [INFO][5440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" Namespace="calico-apiserver" Pod="calico-apiserver-77b4c77bdf-4jfxv" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-calico--apiserver--77b4c77bdf--4jfxv-eth0" Dec 16 12:46:27.953326 systemd[1]: Started cri-containerd-f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc.scope - libcontainer container f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc. Dec 16 12:46:27.971636 containerd[2018]: time="2025-12-16T12:46:27.970409554Z" level=info msg="connecting to shim 0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549" address="unix:///run/containerd/s/5df48576c59472ebdbf1e343501c6a6d458463b0752748b07d0b17f2dfb33cfd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:27.977000 audit[5550]: NETFILTER_CFG table=filter:128 family=2 entries=16 op=nft_register_rule pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:27.977000 audit[5550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffda2a5b20 a2=0 a3=1 items=0 ppid=3809 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:27.986000 audit[5550]: NETFILTER_CFG table=nat:129 family=2 entries=42 op=nft_register_rule pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:27.986000 audit[5550]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffda2a5b20 a2=0 a3=1 items=0 ppid=3809 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:27.996000 audit: BPF prog-id=238 op=LOAD Dec 16 12:46:27.998000 audit: BPF prog-id=239 op=LOAD Dec 16 12:46:27.998000 audit[5532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5515 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633033613665396565336663346662333833636266643738326439 Dec 16 12:46:27.998000 audit: BPF prog-id=239 op=UNLOAD Dec 16 12:46:27.998000 audit[5532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5515 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633033613665396565336663346662333833636266643738326439 Dec 16 12:46:27.998000 audit: BPF prog-id=240 op=LOAD Dec 16 12:46:27.998000 audit[5532]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5515 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633033613665396565336663346662333833636266643738326439 Dec 16 12:46:27.998000 audit: BPF prog-id=241 op=LOAD Dec 16 12:46:27.998000 audit[5532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5515 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633033613665396565336663346662333833636266643738326439 Dec 16 12:46:27.998000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:46:27.998000 audit[5532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5515 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633033613665396565336663346662333833636266643738326439 Dec 16 12:46:27.998000 audit: BPF prog-id=240 op=UNLOAD Dec 16 12:46:27.998000 
audit[5532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5515 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633033613665396565336663346662333833636266643738326439 Dec 16 12:46:27.998000 audit: BPF prog-id=242 op=LOAD Dec 16 12:46:27.998000 audit[5532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5515 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:27.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633033613665396565336663346662333833636266643738326439 Dec 16 12:46:28.007453 systemd[1]: Started cri-containerd-0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549.scope - libcontainer container 0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549. 
Dec 16 12:46:28.020000 audit[5583]: NETFILTER_CFG table=filter:130 family=2 entries=16 op=nft_register_rule pid=5583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:28.020000 audit[5583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd2397d10 a2=0 a3=1 items=0 ppid=3809 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:28.039189 containerd[2018]: time="2025-12-16T12:46:28.038564255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mj8nq,Uid:180cd658-ddf7-4444-81e2-acfbf19611d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2c03a6e9ee3fc4fb383cbfd782d9ca873df70101422e21891a362aacba265fc\"" Dec 16 12:46:28.041990 containerd[2018]: time="2025-12-16T12:46:28.041590943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:46:28.053000 audit[5583]: NETFILTER_CFG table=nat:131 family=2 entries=54 op=nft_register_chain pid=5583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:28.053000 audit[5583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19092 a0=3 a1=ffffd2397d10 a2=0 a3=1 items=0 ppid=3809 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:28.059000 audit: BPF prog-id=243 op=LOAD Dec 16 12:46:28.060000 audit: BPF prog-id=244 op=LOAD Dec 16 12:46:28.060000 audit[5571]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5555 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064623938343435336530373433653437663336383262636365383733 Dec 16 12:46:28.060000 audit: BPF prog-id=244 op=UNLOAD Dec 16 12:46:28.060000 audit[5571]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5555 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064623938343435336530373433653437663336383262636365383733 Dec 16 12:46:28.060000 audit: BPF prog-id=245 op=LOAD Dec 16 12:46:28.060000 audit[5571]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5555 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064623938343435336530373433653437663336383262636365383733 Dec 16 12:46:28.060000 audit: BPF prog-id=246 op=LOAD Dec 16 
12:46:28.060000 audit[5571]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5555 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064623938343435336530373433653437663336383262636365383733 Dec 16 12:46:28.060000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:46:28.060000 audit[5571]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5555 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064623938343435336530373433653437663336383262636365383733 Dec 16 12:46:28.060000 audit: BPF prog-id=245 op=UNLOAD Dec 16 12:46:28.060000 audit[5571]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5555 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064623938343435336530373433653437663336383262636365383733 Dec 16 12:46:28.061000 
audit: BPF prog-id=247 op=LOAD Dec 16 12:46:28.061000 audit[5571]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5555 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064623938343435336530373433653437663336383262636365383733 Dec 16 12:46:28.076451 systemd-networkd[1715]: calicaa662a8056: Link UP Dec 16 12:46:28.078541 systemd-networkd[1715]: calicaa662a8056: Gained carrier Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.667 [INFO][5461] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.684 [INFO][5461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0 goldmane-666569f655- calico-system 828880ea-211a-4230-af15-b5fa7bcbc734 834 0 2025-12-16 12:45:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-a-6d618b7fe6 goldmane-666569f655-llq4t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicaa662a8056 [] [] }} ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Namespace="calico-system" Pod="goldmane-666569f655-llq4t" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.684 [INFO][5461] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Namespace="calico-system" Pod="goldmane-666569f655-llq4t" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.728 [INFO][5493] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" HandleID="k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.728 [INFO][5493] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" HandleID="k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2f70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-6d618b7fe6", "pod":"goldmane-666569f655-llq4t", "timestamp":"2025-12-16 12:46:27.728132193 +0000 UTC"}, Hostname:"ci-4515.1.0-a-6d618b7fe6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.728 [INFO][5493] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.850 [INFO][5493] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.852 [INFO][5493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-6d618b7fe6' Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:27.907 [INFO][5493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.012 [INFO][5493] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.023 [INFO][5493] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.027 [INFO][5493] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.031 [INFO][5493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.033 [INFO][5493] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.036 [INFO][5493] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318 Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.053 [INFO][5493] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.065 [INFO][5493] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.127.137/26] block=192.168.127.128/26 handle="k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.065 [INFO][5493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.137/26] handle="k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" host="ci-4515.1.0-a-6d618b7fe6" Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.066 [INFO][5493] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:28.103213 containerd[2018]: 2025-12-16 12:46:28.066 [INFO][5493] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.137/26] IPv6=[] ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" HandleID="k8s-pod-network.4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Workload="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" Dec 16 12:46:28.104187 containerd[2018]: 2025-12-16 12:46:28.071 [INFO][5461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Namespace="calico-system" Pod="goldmane-666569f655-llq4t" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"828880ea-211a-4230-af15-b5fa7bcbc734", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"", Pod:"goldmane-666569f655-llq4t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicaa662a8056", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:28.104187 containerd[2018]: 2025-12-16 12:46:28.071 [INFO][5461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.137/32] ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Namespace="calico-system" Pod="goldmane-666569f655-llq4t" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" Dec 16 12:46:28.104187 containerd[2018]: 2025-12-16 12:46:28.071 [INFO][5461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaa662a8056 ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Namespace="calico-system" Pod="goldmane-666569f655-llq4t" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" Dec 16 12:46:28.104187 containerd[2018]: 2025-12-16 12:46:28.078 [INFO][5461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Namespace="calico-system" Pod="goldmane-666569f655-llq4t" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" Dec 16 12:46:28.104187 containerd[2018]: 2025-12-16 12:46:28.083 [INFO][5461] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Namespace="calico-system" Pod="goldmane-666569f655-llq4t" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"828880ea-211a-4230-af15-b5fa7bcbc734", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-6d618b7fe6", ContainerID:"4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318", Pod:"goldmane-666569f655-llq4t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicaa662a8056", MAC:"42:c6:d7:ba:e1:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:28.104187 containerd[2018]: 2025-12-16 12:46:28.096 [INFO][5461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" Namespace="calico-system" Pod="goldmane-666569f655-llq4t" WorkloadEndpoint="ci--4515.1.0--a--6d618b7fe6-k8s-goldmane--666569f655--llq4t-eth0" Dec 16 12:46:28.107997 containerd[2018]: time="2025-12-16T12:46:28.107948053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b4c77bdf-4jfxv,Uid:e19615e9-eae1-4066-8da3-a07943f9e95e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0db984453e0743e47f3682bcce8734b8cda72d5e7cebefc789f96e3d08729549\"" Dec 16 12:46:28.143949 containerd[2018]: time="2025-12-16T12:46:28.143894733Z" level=info msg="connecting to shim 4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318" address="unix:///run/containerd/s/914a584e663113ecd59172e786c601398e33df9e8a9152e673d8cfe66bc8944d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:28.177109 systemd[1]: Started cri-containerd-4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318.scope - libcontainer container 4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318. 
Dec 16 12:46:28.187000 audit: BPF prog-id=248 op=LOAD Dec 16 12:46:28.187000 audit: BPF prog-id=249 op=LOAD Dec 16 12:46:28.187000 audit[5637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=5625 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313537323262616566356137366331303264663437323039373764 Dec 16 12:46:28.187000 audit: BPF prog-id=249 op=UNLOAD Dec 16 12:46:28.187000 audit[5637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5625 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313537323262616566356137366331303264663437323039373764 Dec 16 12:46:28.187000 audit: BPF prog-id=250 op=LOAD Dec 16 12:46:28.187000 audit[5637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=5625 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.187000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313537323262616566356137366331303264663437323039373764 Dec 16 12:46:28.188000 audit: BPF prog-id=251 op=LOAD Dec 16 12:46:28.188000 audit[5637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=5625 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313537323262616566356137366331303264663437323039373764 Dec 16 12:46:28.188000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:46:28.188000 audit[5637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5625 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313537323262616566356137366331303264663437323039373764 Dec 16 12:46:28.188000 audit: BPF prog-id=250 op=UNLOAD Dec 16 12:46:28.188000 audit[5637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5625 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:46:28.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313537323262616566356137366331303264663437323039373764 Dec 16 12:46:28.188000 audit: BPF prog-id=252 op=LOAD Dec 16 12:46:28.188000 audit[5637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=5625 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437313537323262616566356137366331303264663437323039373764 Dec 16 12:46:28.221741 containerd[2018]: time="2025-12-16T12:46:28.221691402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-llq4t,Uid:828880ea-211a-4230-af15-b5fa7bcbc734,Namespace:calico-system,Attempt:0,} returns sandbox id \"4715722baef5a76c102df4720977d1e870b415dbdb2c34a550df0855aff5d318\"" Dec 16 12:46:28.285605 containerd[2018]: time="2025-12-16T12:46:28.285549933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:28.288570 containerd[2018]: time="2025-12-16T12:46:28.288510427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:46:28.288693 containerd[2018]: time="2025-12-16T12:46:28.288610430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 
12:46:28.289882 kubelet[3651]: E1216 12:46:28.288840 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:28.289882 kubelet[3651]: E1216 12:46:28.288921 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:28.289882 kubelet[3651]: E1216 12:46:28.289539 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp5p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMess
agePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:28.290118 containerd[2018]: time="2025-12-16T12:46:28.289296544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:28.555110 systemd-networkd[1715]: calif0927ac5353: Gained IPv6LL Dec 16 12:46:28.566249 containerd[2018]: time="2025-12-16T12:46:28.566033658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:28.568853 containerd[2018]: time="2025-12-16T12:46:28.568707201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:28.568853 containerd[2018]: time="2025-12-16T12:46:28.568803892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:28.569071 kubelet[3651]: E1216 12:46:28.569008 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:28.569349 kubelet[3651]: E1216 12:46:28.569078 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:28.569726 containerd[2018]: time="2025-12-16T12:46:28.569655746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:46:28.569952 kubelet[3651]: E1216 12:46:28.569571 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9zgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77b4c77bdf-4jfxv_calico-apiserver(e19615e9-eae1-4066-8da3-a07943f9e95e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:28.571990 kubelet[3651]: E1216 12:46:28.571933 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:46:28.674080 kubelet[3651]: I1216 12:46:28.673802 3651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:46:28.747093 systemd-networkd[1715]: cali2136dcfcc9d: Gained IPv6LL Dec 16 12:46:28.800425 containerd[2018]: time="2025-12-16T12:46:28.800361505Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:28.803259 containerd[2018]: time="2025-12-16T12:46:28.803213740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:46:28.803346 containerd[2018]: time="2025-12-16T12:46:28.803303319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:28.803553 kubelet[3651]: E1216 12:46:28.803508 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:28.803618 kubelet[3651]: E1216 12:46:28.803567 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:28.804209 kubelet[3651]: E1216 12:46:28.803975 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjvdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-llq4t_calico-system(828880ea-211a-4230-af15-b5fa7bcbc734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:28.805401 containerd[2018]: time="2025-12-16T12:46:28.804401388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:46:28.805474 kubelet[3651]: E1216 12:46:28.805358 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:46:28.812139 systemd-networkd[1715]: cali48d4e9a5b59: Gained IPv6LL Dec 16 12:46:28.851907 kubelet[3651]: E1216 12:46:28.851840 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:46:28.854143 kubelet[3651]: E1216 12:46:28.854113 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:46:28.854369 kubelet[3651]: E1216 12:46:28.854137 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:46:28.854517 kubelet[3651]: E1216 12:46:28.854178 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:46:28.917000 audit[5678]: NETFILTER_CFG table=filter:132 family=2 entries=15 op=nft_register_rule pid=5678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:28.923485 kernel: kauditd_printk_skb: 234 callbacks suppressed Dec 16 12:46:28.923614 kernel: audit: type=1325 audit(1765889188.917:687): table=filter:132 family=2 entries=15 op=nft_register_rule pid=5678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:28.917000 audit[5678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc658c860 a2=0 a3=1 items=0 ppid=3809 pid=5678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.954396 kernel: audit: type=1300 audit(1765889188.917:687): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc658c860 a2=0 a3=1 items=0 ppid=3809 pid=5678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.917000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:28.966926 kernel: audit: type=1327 audit(1765889188.917:687): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:28.955000 audit[5678]: NETFILTER_CFG table=nat:133 family=2 entries=25 op=nft_register_chain pid=5678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:28.977792 kernel: audit: type=1325 
audit(1765889188.955:688): table=nat:133 family=2 entries=25 op=nft_register_chain pid=5678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:28.955000 audit[5678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffc658c860 a2=0 a3=1 items=0 ppid=3809 pid=5678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:28.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:29.008634 kernel: audit: type=1300 audit(1765889188.955:688): arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffc658c860 a2=0 a3=1 items=0 ppid=3809 pid=5678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.008771 kernel: audit: type=1327 audit(1765889188.955:688): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:29.051488 containerd[2018]: time="2025-12-16T12:46:29.051271614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:29.055182 containerd[2018]: time="2025-12-16T12:46:29.055031330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:46:29.055182 containerd[2018]: time="2025-12-16T12:46:29.055075955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:29.055771 
kubelet[3651]: E1216 12:46:29.055355 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:29.055771 kubelet[3651]: E1216 12:46:29.055678 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:29.056093 kubelet[3651]: E1216 12:46:29.056054 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp5p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:29.057564 kubelet[3651]: E1216 12:46:29.057503 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:29.131153 systemd-networkd[1715]: cali94b8963a4af: Gained IPv6LL Dec 16 12:46:29.147000 audit: BPF prog-id=253 op=LOAD Dec 16 12:46:29.147000 audit[5695]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec0e4958 a2=98 a3=ffffec0e4948 items=0 ppid=5679 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.177178 kernel: audit: type=1334 audit(1765889189.147:689): prog-id=253 op=LOAD Dec 16 12:46:29.177324 kernel: audit: type=1300 audit(1765889189.147:689): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec0e4958 a2=98 a3=ffffec0e4948 items=0 ppid=5679 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.147000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:29.196494 kernel: audit: type=1327 audit(1765889189.147:689): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:29.197070 systemd-networkd[1715]: calicaa662a8056: Gained IPv6LL Dec 16 12:46:29.147000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:46:29.202538 kernel: audit: type=1334 audit(1765889189.147:690): prog-id=253 op=UNLOAD Dec 16 12:46:29.147000 audit[5695]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffec0e4928 a3=0 items=0 ppid=5679 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.147000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:29.147000 audit: BPF prog-id=254 op=LOAD Dec 16 12:46:29.147000 audit[5695]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec0e4808 a2=74 a3=95 items=0 ppid=5679 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.147000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:29.147000 audit: BPF prog-id=254 op=UNLOAD Dec 16 12:46:29.147000 audit[5695]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5679 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.147000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:29.147000 audit: BPF prog-id=255 op=LOAD Dec 16 12:46:29.147000 audit[5695]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec0e4838 a2=40 a3=ffffec0e4868 items=0 ppid=5679 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.147000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:29.147000 audit: BPF prog-id=255 op=UNLOAD Dec 16 12:46:29.147000 audit[5695]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffec0e4868 items=0 ppid=5679 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.147000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:29.154000 audit: BPF prog-id=256 op=LOAD Dec 16 12:46:29.154000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3139488 a2=98 a3=fffff3139478 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.154000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.177000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:46:29.177000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff3139458 a3=0 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.177000 audit: BPF prog-id=257 op=LOAD Dec 16 12:46:29.177000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3139118 a2=74 a3=95 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.201000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:46:29.201000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5679 
pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.201000 audit: BPF prog-id=258 op=LOAD Dec 16 12:46:29.201000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3139178 a2=94 a3=2 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.202000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:46:29.202000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.326000 audit: BPF prog-id=259 op=LOAD Dec 16 12:46:29.326000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3139138 a2=40 a3=fffff3139168 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.326000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.327000 audit: BPF prog-id=259 op=UNLOAD Dec 16 12:46:29.327000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff3139168 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.327000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.334000 audit: BPF prog-id=260 op=LOAD Dec 16 12:46:29.334000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff3139148 a2=94 a3=4 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.334000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.334000 audit: BPF prog-id=260 op=UNLOAD Dec 16 12:46:29.334000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.334000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.335000 audit: BPF prog-id=261 op=LOAD Dec 16 12:46:29.335000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff3138f88 a2=94 a3=5 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.335000 audit: BPF prog-id=261 op=UNLOAD Dec 16 12:46:29.335000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.335000 audit: BPF prog-id=262 op=LOAD Dec 16 12:46:29.335000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff31391b8 a2=94 a3=6 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.335000 audit: BPF prog-id=262 op=UNLOAD Dec 16 12:46:29.335000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.335000 audit: BPF prog-id=263 op=LOAD Dec 16 12:46:29.335000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff3138988 a2=94 a3=83 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.336000 audit: BPF prog-id=264 op=LOAD Dec 16 12:46:29.336000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff3138748 a2=94 a3=2 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.336000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.336000 audit: BPF prog-id=264 op=UNLOAD Dec 16 12:46:29.336000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.336000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.337000 audit: BPF prog-id=263 op=UNLOAD Dec 16 12:46:29.337000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=141df620 a3=141d2b00 items=0 ppid=5679 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:29.348000 audit: BPF prog-id=265 op=LOAD Dec 16 12:46:29.348000 audit[5711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3633568 a2=98 a3=ffffe3633558 items=0 ppid=5679 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:29.348000 audit: BPF prog-id=265 op=UNLOAD Dec 16 12:46:29.348000 audit[5711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe3633538 a3=0 items=0 ppid=5679 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:29.348000 audit: BPF prog-id=266 op=LOAD Dec 16 12:46:29.348000 audit[5711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3633418 a2=74 a3=95 items=0 ppid=5679 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:29.348000 audit: BPF prog-id=266 op=UNLOAD Dec 16 12:46:29.348000 audit[5711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5679 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:29.348000 audit: BPF prog-id=267 op=LOAD Dec 16 12:46:29.348000 audit[5711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3633448 a2=40 a3=ffffe3633478 
items=0 ppid=5679 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:29.348000 audit: BPF prog-id=267 op=UNLOAD Dec 16 12:46:29.348000 audit[5711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe3633478 items=0 ppid=5679 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:29.387301 systemd-networkd[1715]: cali5954f0f62a4: Gained IPv6LL Dec 16 12:46:29.620147 systemd-networkd[1715]: vxlan.calico: Link UP Dec 16 12:46:29.620157 systemd-networkd[1715]: vxlan.calico: Gained carrier Dec 16 12:46:29.650000 audit: BPF prog-id=268 op=LOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd28950b8 a2=98 a3=ffffd28950a8 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=268 op=UNLOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd2895088 a3=0 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=269 op=LOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd2894d98 a2=74 a3=95 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=269 op=UNLOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=270 op=LOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd2894df8 a2=94 a3=2 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=270 op=UNLOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=271 op=LOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd2894c78 a2=40 a3=ffffd2894ca8 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=271 op=UNLOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd2894ca8 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=272 op=LOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd2894dc8 a2=94 a3=b7 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.650000 audit: BPF prog-id=272 op=UNLOAD Dec 16 12:46:29.650000 audit[5763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.650000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.653000 audit: BPF prog-id=273 op=LOAD Dec 16 12:46:29.653000 audit[5763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd2894478 a2=94 a3=2 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.653000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.653000 audit: BPF prog-id=273 op=UNLOAD Dec 16 12:46:29.653000 audit[5763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.653000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.653000 audit: BPF prog-id=274 op=LOAD Dec 16 12:46:29.653000 audit[5763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd2894608 a2=94 a3=30 items=0 ppid=5679 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.653000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:29.663000 audit: BPF prog-id=275 op=LOAD Dec 16 12:46:29.663000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc4cd8528 a2=98 a3=ffffc4cd8518 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.663000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.663000 audit: BPF prog-id=275 op=UNLOAD Dec 16 12:46:29.663000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc4cd84f8 a3=0 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.663000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.663000 audit: BPF prog-id=276 op=LOAD Dec 16 12:46:29.663000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc4cd81b8 a2=74 a3=95 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.663000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.663000 audit: BPF prog-id=276 op=UNLOAD Dec 16 12:46:29.663000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.663000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.663000 audit: BPF prog-id=277 op=LOAD Dec 16 12:46:29.663000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc4cd8218 a2=94 a3=2 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.663000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.663000 audit: BPF prog-id=277 op=UNLOAD Dec 16 12:46:29.663000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.663000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.749000 audit: BPF prog-id=278 op=LOAD Dec 16 12:46:29.749000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc4cd81d8 a2=40 a3=ffffc4cd8208 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.749000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.749000 audit: BPF prog-id=278 op=UNLOAD Dec 16 12:46:29.749000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc4cd8208 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.749000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.756000 audit: BPF prog-id=279 op=LOAD Dec 16 12:46:29.756000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc4cd81e8 a2=94 a3=4 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.756000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.756000 audit: BPF prog-id=279 op=UNLOAD Dec 16 12:46:29.756000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.756000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.756000 audit: BPF prog-id=280 op=LOAD Dec 16 12:46:29.756000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc4cd8028 a2=94 a3=5 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.756000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.756000 audit: BPF prog-id=280 op=UNLOAD Dec 16 12:46:29.756000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.756000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.756000 audit: BPF prog-id=281 op=LOAD Dec 16 12:46:29.756000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc4cd8258 a2=94 a3=6 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.756000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.756000 audit: BPF prog-id=281 op=UNLOAD Dec 16 12:46:29.756000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.756000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.756000 audit: BPF prog-id=282 op=LOAD Dec 16 12:46:29.756000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc4cd7a28 a2=94 a3=83 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.756000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.756000 audit: BPF prog-id=283 op=LOAD Dec 16 12:46:29.756000 audit[5768]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc4cd77e8 a2=94 a3=2 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.756000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.757000 audit: BPF prog-id=283 op=UNLOAD Dec 16 12:46:29.757000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.757000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.757000 audit: BPF prog-id=282 op=UNLOAD Dec 16 12:46:29.757000 audit[5768]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1f536620 a3=1f529b00 items=0 ppid=5679 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.757000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:29.761000 audit: BPF prog-id=274 op=UNLOAD Dec 16 12:46:29.761000 audit[5679]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=400121a140 a2=0 a3=0 items=0 ppid=4745 pid=5679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.761000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:46:29.856782 kubelet[3651]: E1216 12:46:29.856476 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:46:29.857730 kubelet[3651]: E1216 12:46:29.857149 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:46:29.857730 kubelet[3651]: E1216 12:46:29.857315 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:29.866000 audit[5799]: NETFILTER_CFG table=nat:134 family=2 entries=15 op=nft_register_chain pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:29.866000 audit[5799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffcfd89810 a2=0 a3=ffffa31b1fa8 items=0 ppid=5679 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.866000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:29.879000 audit[5798]: NETFILTER_CFG table=raw:135 family=2 entries=21 op=nft_register_chain pid=5798 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:29.879000 audit[5798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff50461f0 a2=0 a3=ffffa3582fa8 items=0 ppid=5679 pid=5798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.879000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:29.892000 audit[5800]: NETFILTER_CFG table=mangle:136 family=2 entries=16 op=nft_register_chain pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:29.892000 audit[5800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffceed26a0 a2=0 a3=ffff94e49fa8 items=0 ppid=5679 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.892000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:29.921000 audit[5805]: NETFILTER_CFG table=filter:137 family=2 entries=350 op=nft_register_chain pid=5805 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:29.921000 audit[5805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=209604 a0=3 a1=fffffc638440 a2=0 a3=ffff9cc98fa8 items=0 ppid=5679 pid=5805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:29.921000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:31.307067 systemd-networkd[1715]: vxlan.calico: Gained IPv6LL Dec 16 12:46:33.568161 containerd[2018]: time="2025-12-16T12:46:33.568095547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:33.867794 
containerd[2018]: time="2025-12-16T12:46:33.867656019Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:33.870368 containerd[2018]: time="2025-12-16T12:46:33.870305501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:46:33.870581 containerd[2018]: time="2025-12-16T12:46:33.870312925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:33.870615 kubelet[3651]: E1216 12:46:33.870580 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:33.871038 kubelet[3651]: E1216 12:46:33.870629 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:33.871038 kubelet[3651]: E1216 12:46:33.870743 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a79f5c30704f47cba759fbc41fd0b2a3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-454zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ddcd6787d-54cfx_calico-system(12f0bf61-64a1-4c2f-bbd3-977cfb8492eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:33.873679 containerd[2018]: time="2025-12-16T12:46:33.873604795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:46:34.123222 containerd[2018]: 
time="2025-12-16T12:46:34.122914869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:34.125724 containerd[2018]: time="2025-12-16T12:46:34.125652305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:46:34.126000 containerd[2018]: time="2025-12-16T12:46:34.125676898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:34.126379 kubelet[3651]: E1216 12:46:34.126105 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:34.126379 kubelet[3651]: E1216 12:46:34.126158 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:34.126379 kubelet[3651]: E1216 12:46:34.126297 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ddcd6787d-54cfx_calico-system(12f0bf61-64a1-4c2f-bbd3-977cfb8492eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:34.127894 kubelet[3651]: E1216 12:46:34.127825 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:46:41.569798 containerd[2018]: time="2025-12-16T12:46:41.569751158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:46:41.810657 containerd[2018]: time="2025-12-16T12:46:41.810598060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:41.818906 containerd[2018]: time="2025-12-16T12:46:41.818794867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:46:41.818906 containerd[2018]: time="2025-12-16T12:46:41.818843260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:41.819108 kubelet[3651]: E1216 12:46:41.819066 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:41.819586 kubelet[3651]: E1216 12:46:41.819119 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:41.819609 containerd[2018]: time="2025-12-16T12:46:41.819382613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:46:41.819777 kubelet[3651]: E1216 12:46:41.819665 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp5p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:42.072963 containerd[2018]: time="2025-12-16T12:46:42.072915021Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:42.075972 containerd[2018]: time="2025-12-16T12:46:42.075915723Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:46:42.076188 containerd[2018]: time="2025-12-16T12:46:42.075951908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:42.076216 kubelet[3651]: E1216 12:46:42.076146 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:42.076216 kubelet[3651]: E1216 12:46:42.076207 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:42.076507 kubelet[3651]: E1216 12:46:42.076420 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjvdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-llq4t_calico-system(828880ea-211a-4230-af15-b5fa7bcbc734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:42.076616 containerd[2018]: time="2025-12-16T12:46:42.076579559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:46:42.077985 kubelet[3651]: E1216 12:46:42.077927 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" 
podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:46:42.354443 containerd[2018]: time="2025-12-16T12:46:42.354202973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:42.357477 containerd[2018]: time="2025-12-16T12:46:42.357395168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:42.357477 containerd[2018]: time="2025-12-16T12:46:42.357397680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:46:42.357864 kubelet[3651]: E1216 12:46:42.357810 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:42.358057 kubelet[3651]: E1216 12:46:42.357890 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:42.358222 kubelet[3651]: E1216 12:46:42.358165 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2g2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6ddd8d65c6-nlxv9_calico-system(64ea5eda-7471-4a46-a060-d20c3a27b031): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:42.358684 containerd[2018]: time="2025-12-16T12:46:42.358652471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:42.360009 kubelet[3651]: E1216 12:46:42.359947 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:46:42.618666 containerd[2018]: time="2025-12-16T12:46:42.618515756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:46:42.622002 containerd[2018]: time="2025-12-16T12:46:42.621921238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:42.622119 containerd[2018]: time="2025-12-16T12:46:42.621923638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:42.622265 kubelet[3651]: E1216 12:46:42.622225 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:42.622348 kubelet[3651]: E1216 12:46:42.622276 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:42.622598 containerd[2018]: time="2025-12-16T12:46:42.622536121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:46:42.622700 kubelet[3651]: E1216 12:46:42.622585 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9zgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77b4c77bdf-4jfxv_calico-apiserver(e19615e9-eae1-4066-8da3-a07943f9e95e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:42.623951 kubelet[3651]: E1216 12:46:42.623902 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:46:42.872212 containerd[2018]: time="2025-12-16T12:46:42.872073917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:42.875965 containerd[2018]: time="2025-12-16T12:46:42.875904100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:46:42.876104 containerd[2018]: time="2025-12-16T12:46:42.875902924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:42.876303 kubelet[3651]: E1216 12:46:42.876261 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:42.876940 kubelet[3651]: E1216 12:46:42.876656 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:42.877748 kubelet[3651]: E1216 12:46:42.877652 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp5p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,St
dinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:42.877902 containerd[2018]: time="2025-12-16T12:46:42.877802095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:42.880887 kubelet[3651]: E1216 12:46:42.880728 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:43.156968 containerd[2018]: time="2025-12-16T12:46:43.156663022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:43.160904 containerd[2018]: time="2025-12-16T12:46:43.160812955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:43.161087 containerd[2018]: 
time="2025-12-16T12:46:43.160829788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:43.161158 kubelet[3651]: E1216 12:46:43.161115 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:43.161200 kubelet[3651]: E1216 12:46:43.161172 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:43.161339 kubelet[3651]: E1216 12:46:43.161307 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s9vv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77b4c77bdf-sh2bd_calico-apiserver(31923b4e-d7e7-4360-b824-f299f181acf0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:43.162940 kubelet[3651]: E1216 12:46:43.162894 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:46:44.567353 containerd[2018]: time="2025-12-16T12:46:44.567227303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:44.858173 containerd[2018]: time="2025-12-16T12:46:44.857859414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:44.860387 containerd[2018]: time="2025-12-16T12:46:44.860268250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:44.860387 containerd[2018]: time="2025-12-16T12:46:44.860312507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:44.860578 kubelet[3651]: E1216 12:46:44.860518 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:44.860861 kubelet[3651]: E1216 12:46:44.860582 3651 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:44.861660 kubelet[3651]: E1216 12:46:44.861115 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjccj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb597c7bd-fs82z_calico-apiserver(f12e3f4c-f803-496f-aa7c-d8e02fdb59ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:44.862428 kubelet[3651]: E1216 12:46:44.862316 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:46:47.569668 kubelet[3651]: E1216 12:46:47.569244 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:46:53.569294 kubelet[3651]: E1216 12:46:53.569161 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:46:53.571234 kubelet[3651]: E1216 12:46:53.571149 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:46:54.568506 kubelet[3651]: E1216 12:46:54.568453 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:46:55.567580 kubelet[3651]: E1216 12:46:55.567470 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:46:56.567372 kubelet[3651]: E1216 12:46:56.567048 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:46:58.568166 kubelet[3651]: E1216 
12:46:58.568010 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:47:00.569344 containerd[2018]: time="2025-12-16T12:47:00.569071618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:47:00.836116 containerd[2018]: time="2025-12-16T12:47:00.835929434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:00.839601 containerd[2018]: time="2025-12-16T12:47:00.839506397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:47:00.839601 containerd[2018]: time="2025-12-16T12:47:00.839550734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:00.839815 kubelet[3651]: E1216 12:47:00.839767 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:00.840140 kubelet[3651]: E1216 12:47:00.839826 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:00.843366 kubelet[3651]: E1216 12:47:00.843294 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a79f5c30704f47cba759fbc41fd0b2a3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-454zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ddcd6787d-54cfx_calico-system(12f0bf61-64a1-4c2f-bbd3-977cfb8492eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" 
Dec 16 12:47:00.846785 containerd[2018]: time="2025-12-16T12:47:00.846745934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:47:01.153944 containerd[2018]: time="2025-12-16T12:47:01.153637900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:01.156887 containerd[2018]: time="2025-12-16T12:47:01.156747329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:47:01.156887 containerd[2018]: time="2025-12-16T12:47:01.156801090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:01.157055 kubelet[3651]: E1216 12:47:01.157009 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:01.157090 kubelet[3651]: E1216 12:47:01.157059 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:01.157207 kubelet[3651]: E1216 12:47:01.157171 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ddcd6787d-54cfx_calico-system(12f0bf61-64a1-4c2f-bbd3-977cfb8492eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:01.158724 kubelet[3651]: E1216 12:47:01.158636 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:47:04.569203 containerd[2018]: time="2025-12-16T12:47:04.569068553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:04.838349 containerd[2018]: time="2025-12-16T12:47:04.838209724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:04.845080 containerd[2018]: time="2025-12-16T12:47:04.845017488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:04.845387 containerd[2018]: time="2025-12-16T12:47:04.845161996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:04.845684 kubelet[3651]: E1216 12:47:04.845317 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:04.845684 kubelet[3651]: E1216 12:47:04.845366 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:04.848511 kubelet[3651]: E1216 12:47:04.846600 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s9vv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77b4c77bdf-sh2bd_calico-apiserver(31923b4e-d7e7-4360-b824-f299f181acf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:04.848511 kubelet[3651]: E1216 12:47:04.847747 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:47:04.848717 containerd[2018]: time="2025-12-16T12:47:04.847126967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:05.101596 containerd[2018]: time="2025-12-16T12:47:05.101441919Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:47:05.105136 containerd[2018]: time="2025-12-16T12:47:05.105066291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:05.105333 containerd[2018]: time="2025-12-16T12:47:05.105165126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:05.106121 kubelet[3651]: E1216 12:47:05.106071 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:05.107316 kubelet[3651]: E1216 12:47:05.106129 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:05.107316 kubelet[3651]: E1216 12:47:05.106251 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9zgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77b4c77bdf-4jfxv_calico-apiserver(e19615e9-eae1-4066-8da3-a07943f9e95e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:05.107640 kubelet[3651]: E1216 12:47:05.107603 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:47:06.568520 containerd[2018]: time="2025-12-16T12:47:06.568461057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:47:06.808515 containerd[2018]: time="2025-12-16T12:47:06.808456108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:06.811149 containerd[2018]: time="2025-12-16T12:47:06.811103667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:47:06.811437 containerd[2018]: time="2025-12-16T12:47:06.811197390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:06.811513 kubelet[3651]: E1216 12:47:06.811369 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:47:06.811513 kubelet[3651]: E1216 12:47:06.811420 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:47:06.812265 kubelet[3651]: E1216 12:47:06.812023 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp5p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:06.812664 containerd[2018]: time="2025-12-16T12:47:06.812629041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:47:07.133710 containerd[2018]: time="2025-12-16T12:47:07.133495669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:07.136660 containerd[2018]: time="2025-12-16T12:47:07.136528815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:47:07.136660 containerd[2018]: time="2025-12-16T12:47:07.136554256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:07.136840 kubelet[3651]: E1216 12:47:07.136784 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:07.136840 kubelet[3651]: E1216 12:47:07.136839 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:07.137345 containerd[2018]: time="2025-12-16T12:47:07.137322374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:47:07.137530 kubelet[3651]: E1216 12:47:07.137424 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2g2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6ddd8d65c6-nlxv9_calico-system(64ea5eda-7471-4a46-a060-d20c3a27b031): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:07.138739 kubelet[3651]: E1216 12:47:07.138702 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:47:07.387558 containerd[2018]: time="2025-12-16T12:47:07.387420447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:07.390994 containerd[2018]: time="2025-12-16T12:47:07.390935727Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:47:07.391199 containerd[2018]: time="2025-12-16T12:47:07.390979472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:07.391861 kubelet[3651]: E1216 12:47:07.391341 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:47:07.391861 kubelet[3651]: E1216 12:47:07.391393 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:47:07.391861 kubelet[3651]: E1216 12:47:07.391508 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp5p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:07.394070 kubelet[3651]: E1216 12:47:07.393965 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:47:11.568961 containerd[2018]: time="2025-12-16T12:47:11.568685141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:47:11.859650 containerd[2018]: time="2025-12-16T12:47:11.859057737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:11.861766 containerd[2018]: time="2025-12-16T12:47:11.861641421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:47:11.861766 containerd[2018]: time="2025-12-16T12:47:11.861685759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:11.862056 kubelet[3651]: E1216 12:47:11.862006 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:47:11.862326 kubelet[3651]: E1216 12:47:11.862065 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:47:11.862326 kubelet[3651]: E1216 12:47:11.862185 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjvdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-llq4t_calico-system(828880ea-211a-4230-af15-b5fa7bcbc734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:11.863680 kubelet[3651]: E1216 12:47:11.863637 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:47:12.569952 containerd[2018]: 
time="2025-12-16T12:47:12.568428238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:12.825544 containerd[2018]: time="2025-12-16T12:47:12.825247822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:12.827845 containerd[2018]: time="2025-12-16T12:47:12.827726360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:12.827845 containerd[2018]: time="2025-12-16T12:47:12.827782721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:12.828245 kubelet[3651]: E1216 12:47:12.828197 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:12.828315 kubelet[3651]: E1216 12:47:12.828260 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:12.828437 kubelet[3651]: E1216 12:47:12.828384 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjccj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb597c7bd-fs82z_calico-apiserver(f12e3f4c-f803-496f-aa7c-d8e02fdb59ff): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:12.829866 kubelet[3651]: E1216 12:47:12.829826 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:47:13.570970 kubelet[3651]: E1216 12:47:13.570921 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:47:18.567620 kubelet[3651]: E1216 12:47:18.567324 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:47:19.569491 kubelet[3651]: E1216 12:47:19.568796 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:47:19.570915 kubelet[3651]: E1216 12:47:19.570198 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:47:19.573351 kubelet[3651]: E1216 12:47:19.573317 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:47:23.569585 kubelet[3651]: E1216 12:47:23.569330 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:47:24.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.38:22-10.200.16.10:41444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:24.017496 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:41444.service - OpenSSH per-connection server daemon (10.200.16.10:41444). Dec 16 12:47:24.021494 kernel: kauditd_printk_skb: 194 callbacks suppressed Dec 16 12:47:24.021626 kernel: audit: type=1130 audit(1765889244.016:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.38:22-10.200.16.10:41444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:24.470000 audit[5921]: USER_ACCT pid=5921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.471791 sshd[5921]: Accepted publickey for core from 10.200.16.10 port 41444 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:24.490375 kernel: audit: type=1101 audit(1765889244.470:756): pid=5921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.490000 audit[5921]: CRED_ACQ pid=5921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.493023 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:24.520433 kernel: audit: type=1103 audit(1765889244.490:757): pid=5921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.520592 kernel: audit: type=1006 audit(1765889244.490:758): pid=5921 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 12:47:24.490000 audit[5921]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc0ccd620 a2=3 a3=0 items=0 ppid=1 pid=5921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:24.539304 kernel: audit: type=1300 audit(1765889244.490:758): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc0ccd620 a2=3 a3=0 items=0 ppid=1 pid=5921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:24.490000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:24.545554 systemd-logind[1983]: New session 10 of user core. Dec 16 12:47:24.546490 kernel: audit: type=1327 audit(1765889244.490:758): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:24.550117 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:47:24.554000 audit[5921]: USER_START pid=5921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.566514 kubelet[3651]: E1216 12:47:24.566436 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:47:24.556000 audit[5924]: CRED_ACQ pid=5924 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.590114 kernel: audit: type=1105 audit(1765889244.554:759): pid=5921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.590254 kernel: audit: type=1103 audit(1765889244.556:760): pid=5924 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.790412 sshd[5924]: Connection closed by 10.200.16.10 port 41444 Dec 16 12:47:24.791185 sshd-session[5921]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:24.792000 audit[5921]: USER_END pid=5921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.813557 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:41444.service: Deactivated successfully. 
Dec 16 12:47:24.792000 audit[5921]: CRED_DISP pid=5921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.830132 kernel: audit: type=1106 audit(1765889244.792:761): pid=5921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.830233 kernel: audit: type=1104 audit(1765889244.792:762): pid=5921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:24.818843 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:47:24.830362 systemd-logind[1983]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:47:24.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.38:22-10.200.16.10:41444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:24.831786 systemd-logind[1983]: Removed session 10. 
Dec 16 12:47:25.572009 kubelet[3651]: E1216 12:47:25.571945 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:47:29.570492 kubelet[3651]: E1216 12:47:29.570176 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:47:29.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.38:22-10.200.16.10:41450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:29.883137 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:41450.service - OpenSSH per-connection server daemon (10.200.16.10:41450). 
Dec 16 12:47:29.903215 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:47:29.903351 kernel: audit: type=1130 audit(1765889249.882:764): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.38:22-10.200.16.10:41450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:30.327000 audit[5941]: USER_ACCT pid=5941 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.347764 sshd[5941]: Accepted publickey for core from 10.200.16.10 port 41450 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:30.347642 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:30.346000 audit[5941]: CRED_ACQ pid=5941 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.364882 kernel: audit: type=1101 audit(1765889250.327:765): pid=5941 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.364987 kernel: audit: type=1103 audit(1765889250.346:766): pid=5941 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.377153 kernel: audit: type=1006 
audit(1765889250.346:767): pid=5941 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:47:30.346000 audit[5941]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd76dec40 a2=3 a3=0 items=0 ppid=1 pid=5941 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:30.395034 kernel: audit: type=1300 audit(1765889250.346:767): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd76dec40 a2=3 a3=0 items=0 ppid=1 pid=5941 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:30.346000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:30.402361 kernel: audit: type=1327 audit(1765889250.346:767): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:30.405684 systemd-logind[1983]: New session 11 of user core. Dec 16 12:47:30.410103 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:47:30.412000 audit[5941]: USER_START pid=5941 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.432000 audit[5944]: CRED_ACQ pid=5944 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.448259 kernel: audit: type=1105 audit(1765889250.412:768): pid=5941 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.448422 kernel: audit: type=1103 audit(1765889250.432:769): pid=5944 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.639007 sshd[5944]: Connection closed by 10.200.16.10 port 41450 Dec 16 12:47:30.639546 sshd-session[5941]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:30.639000 audit[5941]: USER_END pid=5941 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.662428 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:41450.service: Deactivated successfully. 
Dec 16 12:47:30.640000 audit[5941]: CRED_DISP pid=5941 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.665385 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:47:30.680027 kernel: audit: type=1106 audit(1765889250.639:770): pid=5941 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.680172 kernel: audit: type=1104 audit(1765889250.640:771): pid=5941 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:30.682660 systemd-logind[1983]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:47:30.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.38:22-10.200.16.10:41450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:30.685063 systemd-logind[1983]: Removed session 11. 
Dec 16 12:47:31.568144 kubelet[3651]: E1216 12:47:31.568097 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:47:34.568575 kubelet[3651]: E1216 12:47:34.568022 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:47:34.569890 kubelet[3651]: E1216 12:47:34.569710 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:47:35.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.38:22-10.200.16.10:44672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:35.723451 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:44672.service - OpenSSH per-connection server daemon (10.200.16.10:44672). Dec 16 12:47:35.726496 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:47:35.726677 kernel: audit: type=1130 audit(1765889255.722:773): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.38:22-10.200.16.10:44672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:36.140000 audit[5957]: USER_ACCT pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.159178 sshd[5957]: Accepted publickey for core from 10.200.16.10 port 44672 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:36.160990 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:36.159000 audit[5957]: CRED_ACQ pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.176984 kernel: audit: type=1101 audit(1765889256.140:774): pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.177126 kernel: audit: type=1103 audit(1765889256.159:775): pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.190611 kernel: audit: type=1006 audit(1765889256.159:776): pid=5957 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 12:47:36.195430 systemd-logind[1983]: New session 12 of user core. Dec 16 12:47:36.198426 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:47:36.159000 audit[5957]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc6d9300 a2=3 a3=0 items=0 ppid=1 pid=5957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:36.221165 kernel: audit: type=1300 audit(1765889256.159:776): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc6d9300 a2=3 a3=0 items=0 ppid=1 pid=5957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:36.159000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:36.229543 kernel: audit: type=1327 audit(1765889256.159:776): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:36.220000 audit[5957]: USER_START pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.255439 kernel: audit: type=1105 audit(1765889256.220:777): pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.230000 audit[5960]: CRED_ACQ pid=5960 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.274886 kernel: audit: type=1103 audit(1765889256.230:778): pid=5960 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.454345 sshd[5960]: Connection closed by 10.200.16.10 port 44672 Dec 16 12:47:36.455592 sshd-session[5957]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:36.455000 audit[5957]: USER_END pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.456000 audit[5957]: CRED_DISP pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh 
res=success' Dec 16 12:47:36.479006 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:44672.service: Deactivated successfully. Dec 16 12:47:36.485640 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:47:36.493492 kernel: audit: type=1106 audit(1765889256.455:779): pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.493615 kernel: audit: type=1104 audit(1765889256.456:780): pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.494711 systemd-logind[1983]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:47:36.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.38:22-10.200.16.10:44672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:36.496737 systemd-logind[1983]: Removed session 12. Dec 16 12:47:36.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.38:22-10.200.16.10:44686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:36.544159 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:44686.service - OpenSSH per-connection server daemon (10.200.16.10:44686). 
Dec 16 12:47:36.568291 kubelet[3651]: E1216 12:47:36.568247 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:47:36.977000 audit[5973]: USER_ACCT pid=5973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.978900 sshd[5973]: Accepted publickey for core from 10.200.16.10 port 44686 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:36.978000 audit[5973]: CRED_ACQ pid=5973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.978000 audit[5973]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4c2e0d0 a2=3 a3=0 items=0 ppid=1 pid=5973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:36.978000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:36.997509 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:37.004319 systemd-logind[1983]: New session 13 of user core. Dec 16 12:47:37.010293 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:47:37.015000 audit[5973]: USER_START pid=5973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:37.017000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:37.314197 sshd[5976]: Connection closed by 10.200.16.10 port 44686 Dec 16 12:47:37.314586 sshd-session[5973]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:37.315000 audit[5973]: USER_END pid=5973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:37.315000 audit[5973]: CRED_DISP pid=5973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:37.320267 systemd-logind[1983]: Session 13 logged out. Waiting for processes to exit. 
Dec 16 12:47:37.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.38:22-10.200.16.10:44686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:37.320472 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:44686.service: Deactivated successfully. Dec 16 12:47:37.323680 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:47:37.324535 systemd-logind[1983]: Removed session 13. Dec 16 12:47:37.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.38:22-10.200.16.10:44690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:37.397715 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:44690.service - OpenSSH per-connection server daemon (10.200.16.10:44690). Dec 16 12:47:37.568681 kubelet[3651]: E1216 12:47:37.568287 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:47:37.800000 audit[5992]: USER_ACCT pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:37.802160 sshd[5992]: Accepted publickey for core from 10.200.16.10 port 44690 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:37.801000 
audit[5992]: CRED_ACQ pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:37.803866 sshd-session[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:37.801000 audit[5992]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb5d6670 a2=3 a3=0 items=0 ppid=1 pid=5992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:37.801000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:37.810012 systemd-logind[1983]: New session 14 of user core. Dec 16 12:47:37.816101 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:47:37.818000 audit[5992]: USER_START pid=5992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:37.820000 audit[5995]: CRED_ACQ pid=5995 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:38.089581 sshd[5995]: Connection closed by 10.200.16.10 port 44690 Dec 16 12:47:38.090469 sshd-session[5992]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:38.092000 audit[5992]: USER_END pid=5992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:38.092000 audit[5992]: CRED_DISP pid=5992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:38.097045 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:44690.service: Deactivated successfully. Dec 16 12:47:38.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.38:22-10.200.16.10:44690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:38.100849 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:47:38.105191 systemd-logind[1983]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:47:38.107778 systemd-logind[1983]: Removed session 14. 
Dec 16 12:47:38.567517 kubelet[3651]: E1216 12:47:38.567393 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:47:42.568266 kubelet[3651]: E1216 12:47:42.568201 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:47:43.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.38:22-10.200.16.10:49366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:43.180140 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:49366.service - OpenSSH per-connection server daemon (10.200.16.10:49366). Dec 16 12:47:43.184096 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:47:43.184208 kernel: audit: type=1130 audit(1765889263.179:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.38:22-10.200.16.10:49366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:43.621000 audit[6014]: USER_ACCT pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.639172 sshd[6014]: Accepted publickey for core from 10.200.16.10 port 49366 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:43.640484 sshd-session[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:43.639000 audit[6014]: CRED_ACQ pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.655906 kernel: audit: type=1101 audit(1765889263.621:801): pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.656064 kernel: audit: type=1103 audit(1765889263.639:802): pid=6014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.665881 kernel: audit: type=1006 audit(1765889263.639:803): pid=6014 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:47:43.639000 audit[6014]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1606d60 a2=3 a3=0 items=0 ppid=1 pid=6014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:43.684669 kernel: audit: type=1300 audit(1765889263.639:803): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1606d60 a2=3 a3=0 items=0 ppid=1 pid=6014 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:43.639000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:43.688814 systemd-logind[1983]: New session 15 of user core. Dec 16 12:47:43.693114 kernel: audit: type=1327 audit(1765889263.639:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:43.698165 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:47:43.700000 audit[6014]: USER_START pid=6014 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.719000 audit[6017]: CRED_ACQ pid=6017 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.738299 kernel: audit: type=1105 audit(1765889263.700:804): pid=6014 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.738453 kernel: audit: type=1103 audit(1765889263.719:805): pid=6017 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.933023 sshd[6017]: Connection closed by 10.200.16.10 port 49366 Dec 16 12:47:43.933607 sshd-session[6014]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:43.933000 audit[6014]: USER_END pid=6014 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.954051 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:49366.service: Deactivated successfully. Dec 16 12:47:43.934000 audit[6014]: CRED_DISP pid=6014 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.957026 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:47:43.968841 kernel: audit: type=1106 audit(1765889263.933:806): pid=6014 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.968919 kernel: audit: type=1104 audit(1765889263.934:807): pid=6014 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:43.969039 systemd-logind[1983]: Session 15 logged out. 
Waiting for processes to exit. Dec 16 12:47:43.970341 systemd-logind[1983]: Removed session 15. Dec 16 12:47:43.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.38:22-10.200.16.10:49366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:44.567312 kubelet[3651]: E1216 12:47:44.567162 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:47:45.568860 kubelet[3651]: E1216 12:47:45.568197 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:47:47.573072 containerd[2018]: time="2025-12-16T12:47:47.572955486Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:47.889104 containerd[2018]: time="2025-12-16T12:47:47.888961755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:47.891539 containerd[2018]: time="2025-12-16T12:47:47.891468829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:47.891695 containerd[2018]: time="2025-12-16T12:47:47.891582864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:47.891968 kubelet[3651]: E1216 12:47:47.891910 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:47.892296 kubelet[3651]: E1216 12:47:47.891974 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:47.893944 kubelet[3651]: E1216 12:47:47.892316 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9zgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77b4c77bdf-4jfxv_calico-apiserver(e19615e9-eae1-4066-8da3-a07943f9e95e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:47.893944 kubelet[3651]: E1216 12:47:47.893862 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:47:47.894116 containerd[2018]: time="2025-12-16T12:47:47.892855397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:47:48.145418 containerd[2018]: time="2025-12-16T12:47:48.144995917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:48.147905 containerd[2018]: time="2025-12-16T12:47:48.147813088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:47:48.148015 containerd[2018]: time="2025-12-16T12:47:48.147937836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:48.148179 kubelet[3651]: E1216 12:47:48.148135 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:48.148252 kubelet[3651]: E1216 12:47:48.148191 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:48.148360 kubelet[3651]: E1216 12:47:48.148323 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a79f5c30704f47cba759fbc41fd0b2a3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-454zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ddcd6787d-54cfx_calico-system(12f0bf61-64a1-4c2f-bbd3-977cfb8492eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:48.150680 containerd[2018]: time="2025-12-16T12:47:48.150632307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:47:48.388378 containerd[2018]: time="2025-12-16T12:47:48.388040714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:48.395925 containerd[2018]: time="2025-12-16T12:47:48.395745893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:47:48.395925 containerd[2018]: time="2025-12-16T12:47:48.395843704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:48.396545 kubelet[3651]: E1216 12:47:48.396490 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:48.396649 kubelet[3651]: E1216 12:47:48.396559 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:48.397011 kubelet[3651]: E1216 12:47:48.396708 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-454zl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ddcd6787d-54cfx_calico-system(12f0bf61-64a1-4c2f-bbd3-977cfb8492eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:48.398060 kubelet[3651]: E1216 12:47:48.397845 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:47:49.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.38:22-10.200.16.10:49372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:49.023839 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:49372.service - OpenSSH per-connection server daemon (10.200.16.10:49372). Dec 16 12:47:49.027948 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:47:49.028056 kernel: audit: type=1130 audit(1765889269.023:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.38:22-10.200.16.10:49372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:49.482000 audit[6031]: USER_ACCT pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.486140 sshd[6031]: Accepted publickey for core from 10.200.16.10 port 49372 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:49.487857 sshd-session[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:49.486000 audit[6031]: CRED_ACQ pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.517152 kernel: audit: type=1101 audit(1765889269.482:810): pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.517301 kernel: audit: type=1103 audit(1765889269.486:811): pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.527823 kernel: audit: type=1006 audit(1765889269.486:812): pid=6031 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:47:49.486000 audit[6031]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6df6430 a2=3 a3=0 items=0 ppid=1 pid=6031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:49.552062 kernel: audit: type=1300 audit(1765889269.486:812): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6df6430 a2=3 a3=0 items=0 ppid=1 pid=6031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:49.556251 systemd-logind[1983]: New session 16 of user core. Dec 16 12:47:49.486000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:49.563861 kernel: audit: type=1327 audit(1765889269.486:812): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:49.565172 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:47:49.569000 audit[6031]: USER_START pid=6031 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.592909 kernel: audit: type=1105 audit(1765889269.569:813): pid=6031 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.592000 audit[6034]: CRED_ACQ pid=6034 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.610957 kernel: audit: type=1103 audit(1765889269.592:814): pid=6034 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.833939 sshd[6034]: Connection closed by 10.200.16.10 port 49372 Dec 16 12:47:49.834312 sshd-session[6031]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:49.838000 audit[6031]: USER_END pid=6031 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.862681 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:49372.service: Deactivated successfully. Dec 16 12:47:49.864715 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:47:49.838000 audit[6031]: CRED_DISP pid=6031 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.884202 kernel: audit: type=1106 audit(1765889269.838:815): pid=6031 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.884383 kernel: audit: type=1104 audit(1765889269.838:816): pid=6031 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.887535 systemd-logind[1983]: Session 16 logged out. 
Waiting for processes to exit. Dec 16 12:47:49.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.38:22-10.200.16.10:49372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:49.893320 systemd-logind[1983]: Removed session 16. Dec 16 12:47:50.567232 kubelet[3651]: E1216 12:47:50.566949 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:47:51.567450 kubelet[3651]: E1216 12:47:51.567356 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:47:54.935398 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:51820.service - OpenSSH per-connection server daemon (10.200.16.10:51820). Dec 16 12:47:54.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.38:22-10.200.16.10:51820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:54.959303 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:47:54.959376 kernel: audit: type=1130 audit(1765889274.935:818): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.38:22-10.200.16.10:51820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:55.368667 sshd[6079]: Accepted publickey for core from 10.200.16.10 port 51820 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:55.367000 audit[6079]: USER_ACCT pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.371126 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:55.368000 audit[6079]: CRED_ACQ pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.406533 kernel: audit: type=1101 audit(1765889275.367:819): pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.406670 kernel: audit: type=1103 audit(1765889275.368:820): pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.412099 systemd-logind[1983]: New session 
17 of user core. Dec 16 12:47:55.417168 kernel: audit: type=1006 audit(1765889275.368:821): pid=6079 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:47:55.368000 audit[6079]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd859e9e0 a2=3 a3=0 items=0 ppid=1 pid=6079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:55.434398 kernel: audit: type=1300 audit(1765889275.368:821): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd859e9e0 a2=3 a3=0 items=0 ppid=1 pid=6079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:55.368000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:55.441059 kernel: audit: type=1327 audit(1765889275.368:821): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:55.442175 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:47:55.446000 audit[6079]: USER_START pid=6079 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.467000 audit[6082]: CRED_ACQ pid=6082 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.484267 kernel: audit: type=1105 audit(1765889275.446:822): pid=6079 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.484427 kernel: audit: type=1103 audit(1765889275.467:823): pid=6082 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.711995 sshd[6082]: Connection closed by 10.200.16.10 port 51820 Dec 16 12:47:55.710671 sshd-session[6079]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:55.712000 audit[6079]: USER_END pid=6079 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.720677 systemd-logind[1983]: Session 17 logged out. Waiting for processes to exit. 
Dec 16 12:47:55.721266 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:47:55.723831 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:51820.service: Deactivated successfully. Dec 16 12:47:55.729572 systemd-logind[1983]: Removed session 17. Dec 16 12:47:55.717000 audit[6079]: CRED_DISP pid=6079 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.754274 kernel: audit: type=1106 audit(1765889275.712:824): pid=6079 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.754430 kernel: audit: type=1104 audit(1765889275.717:825): pid=6079 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.38:22-10.200.16.10:51820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:55.793120 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:51822.service - OpenSSH per-connection server daemon (10.200.16.10:51822). Dec 16 12:47:55.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.38:22-10.200.16.10:51822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:56.198000 audit[6094]: USER_ACCT pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:56.199809 sshd[6094]: Accepted publickey for core from 10.200.16.10 port 51822 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:56.199000 audit[6094]: CRED_ACQ pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:56.199000 audit[6094]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff738fe80 a2=3 a3=0 items=0 ppid=1 pid=6094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:56.199000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:56.201136 sshd-session[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:56.205780 systemd-logind[1983]: New session 18 of user core. Dec 16 12:47:56.212260 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:47:56.215000 audit[6094]: USER_START pid=6094 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:56.217000 audit[6097]: CRED_ACQ pid=6097 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:56.566635 containerd[2018]: time="2025-12-16T12:47:56.566532447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:47:56.579569 sshd[6097]: Connection closed by 10.200.16.10 port 51822 Dec 16 12:47:56.581090 sshd-session[6094]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:56.581000 audit[6094]: USER_END pid=6094 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:56.581000 audit[6094]: CRED_DISP pid=6094 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:56.584646 systemd-logind[1983]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:47:56.585651 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:51822.service: Deactivated successfully. 
Dec 16 12:47:56.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.38:22-10.200.16.10:51822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:56.590542 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:47:56.594634 systemd-logind[1983]: Removed session 18. Dec 16 12:47:56.671453 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:51832.service - OpenSSH per-connection server daemon (10.200.16.10:51832). Dec 16 12:47:56.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.38:22-10.200.16.10:51832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:56.813499 containerd[2018]: time="2025-12-16T12:47:56.813437670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:56.816048 containerd[2018]: time="2025-12-16T12:47:56.815952207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:47:56.816048 containerd[2018]: time="2025-12-16T12:47:56.815999912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:56.816411 kubelet[3651]: E1216 12:47:56.816333 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:47:56.816867 kubelet[3651]: E1216 12:47:56.816422 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:47:56.816867 kubelet[3651]: E1216 12:47:56.816542 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp5p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePo
licy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:56.823413 containerd[2018]: time="2025-12-16T12:47:56.823356221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:47:57.090751 containerd[2018]: time="2025-12-16T12:47:57.090342369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:57.091000 audit[6107]: USER_ACCT pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:57.094217 sshd[6107]: Accepted publickey for core from 10.200.16.10 port 51832 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:57.094000 audit[6107]: CRED_ACQ pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:57.094000 audit[6107]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcdbc95f0 a2=3 a3=0 items=0 ppid=1 pid=6107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:57.094000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:57.095310 sshd-session[6107]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:57.096023 containerd[2018]: time="2025-12-16T12:47:57.094530917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:57.096023 containerd[2018]: time="2025-12-16T12:47:57.094600783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:47:57.096682 kubelet[3651]: E1216 12:47:57.096219 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:47:57.096682 kubelet[3651]: E1216 12:47:57.096275 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:47:57.096682 kubelet[3651]: E1216 12:47:57.096380 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp5p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mj8nq_calico-system(180cd658-ddf7-4444-81e2-acfbf19611d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:57.097733 kubelet[3651]: E1216 12:47:57.097673 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5" Dec 16 12:47:57.101596 systemd-logind[1983]: New session 19 of user core. Dec 16 12:47:57.107454 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:47:57.111000 audit[6107]: USER_START pid=6107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:57.113000 audit[6111]: CRED_ACQ pid=6111 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:57.570262 containerd[2018]: time="2025-12-16T12:47:57.568303840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:47:57.831100 containerd[2018]: time="2025-12-16T12:47:57.830293659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:57.834325 containerd[2018]: time="2025-12-16T12:47:57.834165222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:47:57.834325 containerd[2018]: time="2025-12-16T12:47:57.834278945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:57.835172 kubelet[3651]: E1216 12:47:57.835116 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:57.835549 kubelet[3651]: E1216 12:47:57.835180 3651 kuberuntime_image.go:42] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:57.835549 kubelet[3651]: E1216 12:47:57.835308 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2g2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6ddd8d65c6-nlxv9_calico-system(64ea5eda-7471-4a46-a060-d20c3a27b031): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:57.836916 kubelet[3651]: E1216 12:47:57.836865 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031" Dec 16 12:47:57.990000 audit[6123]: 
NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=6123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:47:57.990000 audit[6123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd43d2550 a2=0 a3=1 items=0 ppid=3809 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:57.990000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:47:57.996000 audit[6123]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=6123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:47:57.996000 audit[6123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd43d2550 a2=0 a3=1 items=0 ppid=3809 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:57.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:47:58.015000 audit[6125]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=6125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:47:58.015000 audit[6125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffebc1b180 a2=0 a3=1 items=0 ppid=3809 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:58.015000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:47:58.023000 audit[6125]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=6125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:47:58.023000 audit[6125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffebc1b180 a2=0 a3=1 items=0 ppid=3809 pid=6125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:58.023000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:47:58.058302 sshd[6111]: Connection closed by 10.200.16.10 port 51832 Dec 16 12:47:58.059096 sshd-session[6107]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:58.060000 audit[6107]: USER_END pid=6107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:58.060000 audit[6107]: CRED_DISP pid=6107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:58.065157 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:51832.service: Deactivated successfully. Dec 16 12:47:58.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.38:22-10.200.16.10:51832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:58.068134 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:47:58.071205 systemd-logind[1983]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:47:58.073501 systemd-logind[1983]: Removed session 19. Dec 16 12:47:58.151009 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:51838.service - OpenSSH per-connection server daemon (10.200.16.10:51838). Dec 16 12:47:58.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.38:22-10.200.16.10:51838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:58.568726 kubelet[3651]: E1216 12:47:58.568680 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e" Dec 16 12:47:58.578000 audit[6130]: USER_ACCT pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:58.579215 sshd[6130]: Accepted publickey for core from 10.200.16.10 port 51838 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:58.580000 audit[6130]: CRED_ACQ pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:58.580000 audit[6130]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff75129d0 a2=3 a3=0 items=0 ppid=1 pid=6130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:58.580000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:58.582667 sshd-session[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:58.591155 systemd-logind[1983]: New session 20 of user core. Dec 16 12:47:58.596316 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:47:58.598000 audit[6130]: USER_START pid=6130 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:58.601000 audit[6133]: CRED_ACQ pid=6133 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:58.999548 sshd[6133]: Connection closed by 10.200.16.10 port 51838 Dec 16 12:47:58.998339 sshd-session[6130]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:59.000000 audit[6130]: USER_END pid=6130 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:59.000000 audit[6130]: CRED_DISP pid=6130 uid=0 auid=500 ses=20 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:59.003478 systemd-logind[1983]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:47:59.004065 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:51838.service: Deactivated successfully. Dec 16 12:47:59.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.38:22-10.200.16.10:51838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:59.010461 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:47:59.013526 systemd-logind[1983]: Removed session 20. Dec 16 12:47:59.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.38:22-10.200.16.10:51850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:59.090330 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:51850.service - OpenSSH per-connection server daemon (10.200.16.10:51850). 
Dec 16 12:47:59.536000 audit[6143]: USER_ACCT pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:59.537835 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 51850 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:59.537000 audit[6143]: CRED_ACQ pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:59.537000 audit[6143]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3855ea0 a2=3 a3=0 items=0 ppid=1 pid=6143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:59.537000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:59.539078 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:59.543401 systemd-logind[1983]: New session 21 of user core. Dec 16 12:47:59.548064 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 12:47:59.549000 audit[6143]: USER_START pid=6143 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:59.551000 audit[6146]: CRED_ACQ pid=6146 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:59.568801 containerd[2018]: time="2025-12-16T12:47:59.568349231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:59.810353 containerd[2018]: time="2025-12-16T12:47:59.810208876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:59.813025 containerd[2018]: time="2025-12-16T12:47:59.812956362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:59.813674 containerd[2018]: time="2025-12-16T12:47:59.813077078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:59.813755 kubelet[3651]: E1216 12:47:59.813299 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:59.813755 kubelet[3651]: E1216 12:47:59.813354 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:59.813755 kubelet[3651]: E1216 12:47:59.813531 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s9vv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77b4c77bdf-sh2bd_calico-apiserver(31923b4e-d7e7-4360-b824-f299f181acf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:59.815197 kubelet[3651]: E1216 12:47:59.814659 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0" Dec 16 12:47:59.820813 sshd[6146]: Connection closed by 10.200.16.10 port 51850 Dec 16 12:47:59.821418 sshd-session[6143]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:59.823000 audit[6143]: USER_END pid=6143 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:59.823000 audit[6143]: CRED_DISP pid=6143 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:59.827751 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:51850.service: Deactivated successfully. Dec 16 12:47:59.831417 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:47:59.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.38:22-10.200.16.10:51850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:59.833281 systemd-logind[1983]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:47:59.835622 systemd-logind[1983]: Removed session 21. 
Dec 16 12:48:01.571115 kubelet[3651]: E1216 12:48:01.570893 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb" Dec 16 12:48:02.568772 containerd[2018]: time="2025-12-16T12:48:02.568292713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:48:02.597000 audit[6181]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=6181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:02.602517 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:48:02.602661 kernel: audit: type=1325 audit(1765889282.597:867): table=filter:142 family=2 entries=26 op=nft_register_rule pid=6181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:02.597000 audit[6181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff36b2c20 a2=0 a3=1 items=0 ppid=3809 pid=6181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:02.632161 kernel: audit: type=1300 audit(1765889282.597:867): arch=c00000b7 syscall=211 
success=yes exit=5248 a0=3 a1=fffff36b2c20 a2=0 a3=1 items=0 ppid=3809 pid=6181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:02.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:02.642590 kernel: audit: type=1327 audit(1765889282.597:867): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:02.618000 audit[6181]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=6181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:02.653413 kernel: audit: type=1325 audit(1765889282.618:868): table=nat:143 family=2 entries=104 op=nft_register_chain pid=6181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:02.618000 audit[6181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff36b2c20 a2=0 a3=1 items=0 ppid=3809 pid=6181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:02.675964 kernel: audit: type=1300 audit(1765889282.618:868): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff36b2c20 a2=0 a3=1 items=0 ppid=3809 pid=6181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:02.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:02.677931 kernel: audit: type=1327 audit(1765889282.618:868): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:02.856009 containerd[2018]: time="2025-12-16T12:48:02.855511440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:02.859578 containerd[2018]: time="2025-12-16T12:48:02.859504718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:48:02.859740 containerd[2018]: time="2025-12-16T12:48:02.859508262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:02.859971 kubelet[3651]: E1216 12:48:02.859922 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:48:02.860442 kubelet[3651]: E1216 12:48:02.859983 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:48:02.860442 kubelet[3651]: E1216 12:48:02.860109 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjccj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fb597c7bd-fs82z_calico-apiserver(f12e3f4c-f803-496f-aa7c-d8e02fdb59ff): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:02.861714 kubelet[3651]: E1216 12:48:02.861672 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:48:03.569782 containerd[2018]: time="2025-12-16T12:48:03.569280658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:48:03.859212 containerd[2018]: time="2025-12-16T12:48:03.858975377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:03.862751 containerd[2018]: time="2025-12-16T12:48:03.862071656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:48:03.862970 containerd[2018]: time="2025-12-16T12:48:03.862731762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:03.863047 kubelet[3651]: E1216 12:48:03.863005 3651 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:48:03.863387 kubelet[3651]: E1216 12:48:03.863062 3651 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:48:03.863387 kubelet[3651]: E1216 12:48:03.863276 3651 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjvdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-llq4t_calico-system(828880ea-211a-4230-af15-b5fa7bcbc734): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:03.864498 kubelet[3651]: E1216 12:48:03.864460 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:48:04.906546 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:56316.service - OpenSSH per-connection 
server daemon (10.200.16.10:56316).
Dec 16 12:48:04.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.38:22-10.200.16.10:56316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:04.922946 kernel: audit: type=1130 audit(1765889284.905:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.38:22-10.200.16.10:56316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:05.316000 audit[6183]: USER_ACCT pid=6183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:05.334822 sshd[6183]: Accepted publickey for core from 10.200.16.10 port 56316 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo
Dec 16 12:48:05.338209 sshd-session[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:48:05.336000 audit[6183]: CRED_ACQ pid=6183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:05.356709 kernel: audit: type=1101 audit(1765889285.316:870): pid=6183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:05.356848 kernel: audit: type=1103 audit(1765889285.336:871): pid=6183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:05.360036 systemd-logind[1983]: New session 22 of user core.
Dec 16 12:48:05.366926 kernel: audit: type=1006 audit(1765889285.336:872): pid=6183 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Dec 16 12:48:05.336000 audit[6183]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe59f2700 a2=3 a3=0 items=0 ppid=1 pid=6183 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:05.336000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:05.371144 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 12:48:05.374000 audit[6183]: USER_START pid=6183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:05.376000 audit[6186]: CRED_ACQ pid=6186 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:05.588605 sshd[6186]: Connection closed by 10.200.16.10 port 56316
Dec 16 12:48:05.589637 sshd-session[6183]: pam_unix(sshd:session): session closed for user core
Dec 16 12:48:05.592000 audit[6183]: USER_END pid=6183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:05.592000 audit[6183]: CRED_DISP pid=6183 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:05.596559 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:56316.service: Deactivated successfully.
Dec 16 12:48:05.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.38:22-10.200.16.10:56316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:05.600519 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 12:48:05.602444 systemd-logind[1983]: Session 22 logged out. Waiting for processes to exit.
Dec 16 12:48:05.604921 systemd-logind[1983]: Removed session 22.
Dec 16 12:48:09.570934 kubelet[3651]: E1216 12:48:09.570242 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031"
Dec 16 12:48:09.573796 kubelet[3651]: E1216 12:48:09.573726 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5"
Dec 16 12:48:10.674357 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 16 12:48:10.674489 kernel: audit: type=1130 audit(1765889290.666:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.38:22-10.200.16.10:53086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:10.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.38:22-10.200.16.10:53086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:10.667166 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:53086.service - OpenSSH per-connection server daemon (10.200.16.10:53086).
Dec 16 12:48:11.084000 audit[6198]: USER_ACCT pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.103146 sshd[6198]: Accepted publickey for core from 10.200.16.10 port 53086 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo
Dec 16 12:48:11.102000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.122141 kernel: audit: type=1101 audit(1765889291.084:879): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.122435 kernel: audit: type=1103 audit(1765889291.102:880): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.110536 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:48:11.119424 systemd-logind[1983]: New session 23 of user core.
Dec 16 12:48:11.133159 kernel: audit: type=1006 audit(1765889291.102:881): pid=6198 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Dec 16 12:48:11.102000 audit[6198]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffefe1f3f0 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:11.152851 kernel: audit: type=1300 audit(1765889291.102:881): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffefe1f3f0 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:11.102000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:11.163059 kernel: audit: type=1327 audit(1765889291.102:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:11.153207 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 12:48:11.163000 audit[6198]: USER_START pid=6198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.184435 kernel: audit: type=1105 audit(1765889291.163:882): pid=6198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.184000 audit[6201]: CRED_ACQ pid=6201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.201996 kernel: audit: type=1103 audit(1765889291.184:883): pid=6201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.385149 sshd[6201]: Connection closed by 10.200.16.10 port 53086
Dec 16 12:48:11.386296 sshd-session[6198]: pam_unix(sshd:session): session closed for user core
Dec 16 12:48:11.387000 audit[6198]: USER_END pid=6198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.394698 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:53086.service: Deactivated successfully.
Dec 16 12:48:11.397598 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 12:48:11.399147 systemd-logind[1983]: Session 23 logged out. Waiting for processes to exit.
Dec 16 12:48:11.400657 systemd-logind[1983]: Removed session 23.
Dec 16 12:48:11.387000 audit[6198]: CRED_DISP pid=6198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.423678 kernel: audit: type=1106 audit(1765889291.387:884): pid=6198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.423835 kernel: audit: type=1104 audit(1765889291.387:885): pid=6198 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:11.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.38:22-10.200.16.10:53086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:12.566821 kubelet[3651]: E1216 12:48:12.566625 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e"
Dec 16 12:48:12.570255 kubelet[3651]: E1216 12:48:12.570182 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0"
Dec 16 12:48:15.568098 kubelet[3651]: E1216 12:48:15.568048 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff"
Dec 16 12:48:16.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.38:22-10.200.16.10:53096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:16.480110 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:53096.service - OpenSSH per-connection server daemon (10.200.16.10:53096).
Dec 16 12:48:16.483544 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:48:16.483635 kernel: audit: type=1130 audit(1765889296.478:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.38:22-10.200.16.10:53096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:16.567856 kubelet[3651]: E1216 12:48:16.567726 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb"
Dec 16 12:48:16.896000 audit[6212]: USER_ACCT pid=6212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:16.899395 sshd[6212]: Accepted publickey for core from 10.200.16.10 port 53096 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo
Dec 16 12:48:16.919000 audit[6212]: CRED_ACQ pid=6212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:16.920842 sshd-session[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:48:16.940790 kernel: audit: type=1101 audit(1765889296.896:888): pid=6212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:16.940949 kernel: audit: type=1103 audit(1765889296.919:889): pid=6212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:16.950293 kernel: audit: type=1006 audit(1765889296.919:890): pid=6212 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Dec 16 12:48:16.919000 audit[6212]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1f76e20 a2=3 a3=0 items=0 ppid=1 pid=6212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:16.968132 kernel: audit: type=1300 audit(1765889296.919:890): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1f76e20 a2=3 a3=0 items=0 ppid=1 pid=6212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:16.919000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:16.972217 systemd-logind[1983]: New session 24 of user core.
Dec 16 12:48:16.975606 kernel: audit: type=1327 audit(1765889296.919:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:16.978101 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 12:48:16.980000 audit[6212]: USER_START pid=6212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:16.983000 audit[6216]: CRED_ACQ pid=6216 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:17.014602 kernel: audit: type=1105 audit(1765889296.980:891): pid=6212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:17.014738 kernel: audit: type=1103 audit(1765889296.983:892): pid=6216 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:17.183327 sshd[6216]: Connection closed by 10.200.16.10 port 53096
Dec 16 12:48:17.183829 sshd-session[6212]: pam_unix(sshd:session): session closed for user core
Dec 16 12:48:17.184000 audit[6212]: USER_END pid=6212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:17.187467 systemd-logind[1983]: Session 24 logged out. Waiting for processes to exit.
Dec 16 12:48:17.189684 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:53096.service: Deactivated successfully.
Dec 16 12:48:17.193701 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 12:48:17.196467 systemd-logind[1983]: Removed session 24.
Dec 16 12:48:17.184000 audit[6212]: CRED_DISP pid=6212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:17.221518 kernel: audit: type=1106 audit(1765889297.184:893): pid=6212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:17.221681 kernel: audit: type=1104 audit(1765889297.184:894): pid=6212 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:17.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.38:22-10.200.16.10:53096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:17.568159 kubelet[3651]: E1216 12:48:17.568117 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734"
Dec 16 12:48:22.262060 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:40170.service - OpenSSH per-connection server daemon (10.200.16.10:40170).
Dec 16 12:48:22.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.38:22-10.200.16.10:40170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:22.266115 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:48:22.266197 kernel: audit: type=1130 audit(1765889302.261:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.38:22-10.200.16.10:40170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:22.569088 kubelet[3651]: E1216 12:48:22.568822 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mj8nq" podUID="180cd658-ddf7-4444-81e2-acfbf19611d5"
Dec 16 12:48:22.650822 sshd[6255]: Accepted publickey for core from 10.200.16.10 port 40170 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo
Dec 16 12:48:22.649000 audit[6255]: USER_ACCT pid=6255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.667000 audit[6255]: CRED_ACQ pid=6255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.686984 sshd-session[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:48:22.688360 kernel: audit: type=1101 audit(1765889302.649:897): pid=6255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.688458 kernel: audit: type=1103 audit(1765889302.667:898): pid=6255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.696997 systemd-logind[1983]: New session 25 of user core.
Dec 16 12:48:22.702889 kernel: audit: type=1006 audit(1765889302.667:899): pid=6255 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 16 12:48:22.667000 audit[6255]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4fce110 a2=3 a3=0 items=0 ppid=1 pid=6255 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:22.707143 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 12:48:22.725187 kernel: audit: type=1300 audit(1765889302.667:899): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4fce110 a2=3 a3=0 items=0 ppid=1 pid=6255 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:22.667000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:22.741294 kernel: audit: type=1327 audit(1765889302.667:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:22.728000 audit[6255]: USER_START pid=6255 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.765740 kernel: audit: type=1105 audit(1765889302.728:900): pid=6255 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.732000 audit[6258]: CRED_ACQ pid=6258 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.787519 kernel: audit: type=1103 audit(1765889302.732:901): pid=6258 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.942891 sshd[6258]: Connection closed by 10.200.16.10 port 40170
Dec 16 12:48:22.944070 sshd-session[6255]: pam_unix(sshd:session): session closed for user core
Dec 16 12:48:22.945000 audit[6255]: USER_END pid=6255 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.949165 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:40170.service: Deactivated successfully.
Dec 16 12:48:22.954665 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 12:48:22.956384 systemd-logind[1983]: Session 25 logged out. Waiting for processes to exit.
Dec 16 12:48:22.958188 systemd-logind[1983]: Removed session 25.
Dec 16 12:48:22.945000 audit[6255]: CRED_DISP pid=6255 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.987927 kernel: audit: type=1106 audit(1765889302.945:902): pid=6255 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.988089 kernel: audit: type=1104 audit(1765889302.945:903): pid=6255 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:22.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.38:22-10.200.16.10:40170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:23.569613 kubelet[3651]: E1216 12:48:23.569567 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-sh2bd" podUID="31923b4e-d7e7-4360-b824-f299f181acf0"
Dec 16 12:48:24.566960 kubelet[3651]: E1216 12:48:24.566910 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ddd8d65c6-nlxv9" podUID="64ea5eda-7471-4a46-a060-d20c3a27b031"
Dec 16 12:48:26.566206 kubelet[3651]: E1216 12:48:26.566156 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77b4c77bdf-4jfxv" podUID="e19615e9-eae1-4066-8da3-a07943f9e95e"
Dec 16 12:48:28.041207 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:40178.service - OpenSSH per-connection server daemon (10.200.16.10:40178).
Dec 16 12:48:28.060401 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:48:28.060452 kernel: audit: type=1130 audit(1765889308.040:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.38:22-10.200.16.10:40178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:28.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.38:22-10.200.16.10:40178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:48:28.459000 audit[6271]: USER_ACCT pid=6271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:28.460993 sshd[6271]: Accepted publickey for core from 10.200.16.10 port 40178 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo
Dec 16 12:48:28.486895 kernel: audit: type=1101 audit(1765889308.459:906): pid=6271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:28.487006 kernel: audit: type=1103 audit(1765889308.478:907): pid=6271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:28.478000 audit[6271]: CRED_ACQ pid=6271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 12:48:28.479812 sshd-session[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:48:28.507782 kernel: audit: type=1006 audit(1765889308.478:908): pid=6271 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 16 12:48:28.478000 audit[6271]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9bfd5e0 a2=3 a3=0 items=0 ppid=1 pid=6271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:28.513111 systemd-logind[1983]: New session 26 of user core.
Dec 16 12:48:28.528191 kernel: audit: type=1300 audit(1765889308.478:908): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9bfd5e0 a2=3 a3=0 items=0 ppid=1 pid=6271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:48:28.478000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:28.535945 kernel: audit: type=1327 audit(1765889308.478:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:48:28.537144 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 12:48:28.539000 audit[6271]: USER_START pid=6271 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.559000 audit[6274]: CRED_ACQ pid=6274 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.576658 kernel: audit: type=1105 audit(1765889308.539:909): pid=6271 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.576799 kernel: audit: type=1103 audit(1765889308.559:910): pid=6274 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.579950 kubelet[3651]: E1216 12:48:28.579378 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-llq4t" podUID="828880ea-211a-4230-af15-b5fa7bcbc734" Dec 16 12:48:28.761811 sshd[6274]: Connection closed by 10.200.16.10 port 40178 Dec 16 12:48:28.780624 
sshd-session[6271]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:28.781000 audit[6271]: USER_END pid=6271 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.785761 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:40178.service: Deactivated successfully. Dec 16 12:48:28.790901 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:48:28.781000 audit[6271]: CRED_DISP pid=6271 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.803555 systemd-logind[1983]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:48:28.806999 systemd-logind[1983]: Removed session 26. Dec 16 12:48:28.818750 kernel: audit: type=1106 audit(1765889308.781:911): pid=6271 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.818925 kernel: audit: type=1104 audit(1765889308.781:912): pid=6271 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.38:22-10.200.16.10:40178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 12:48:29.570012 kubelet[3651]: E1216 12:48:29.569603 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fb597c7bd-fs82z" podUID="f12e3f4c-f803-496f-aa7c-d8e02fdb59ff" Dec 16 12:48:30.568792 kubelet[3651]: E1216 12:48:30.568725 3651 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ddcd6787d-54cfx" podUID="12f0bf61-64a1-4c2f-bbd3-977cfb8492eb"