Dec 12 17:24:29.771979 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Dec 12 17:24:29.771998 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 12 17:24:29.772004 kernel: KASLR enabled Dec 12 17:24:29.772008 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 12 17:24:29.772013 kernel: printk: legacy bootconsole [pl11] enabled Dec 12 17:24:29.772017 kernel: efi: EFI v2.7 by EDK II Dec 12 17:24:29.772023 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db7d598 Dec 12 17:24:29.772027 kernel: random: crng init done Dec 12 17:24:29.772031 kernel: secureboot: Secure boot disabled Dec 12 17:24:29.772035 kernel: ACPI: Early table checksum verification disabled Dec 12 17:24:29.772040 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Dec 12 17:24:29.772044 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772048 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772053 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 12 17:24:29.772059 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772063 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772068 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772074 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772078 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772082 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772087 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 12 17:24:29.772092 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 12 17:24:29.772096 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 12 17:24:29.772101 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:24:29.772105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 12 17:24:29.772110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Dec 12 17:24:29.772114 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Dec 12 17:24:29.772120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 12 17:24:29.772124 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 12 17:24:29.772129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 12 17:24:29.772133 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 12 17:24:29.772138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 12 17:24:29.772142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 12 17:24:29.772147 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 12 17:24:29.772151 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 12 17:24:29.772156 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 12 17:24:29.772160 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Dec 12 17:24:29.772165 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Dec 12 17:24:29.772170 kernel: Zone ranges: Dec 12 17:24:29.772175 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 12 17:24:29.772181 kernel: DMA32 empty Dec 12 17:24:29.772186 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 12 17:24:29.772191 kernel: Device empty Dec 12 17:24:29.772196 kernel: Movable zone start for each node Dec 12 17:24:29.772201 kernel: Early memory node ranges Dec 12 17:24:29.772206 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 12 17:24:29.772210 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Dec 12 17:24:29.772215 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Dec 12 17:24:29.772220 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Dec 12 17:24:29.772224 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Dec 12 17:24:29.772229 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Dec 12 17:24:29.772234 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 12 17:24:29.772239 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 12 17:24:29.772244 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 12 17:24:29.772249 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Dec 12 17:24:29.772253 kernel: psci: probing for conduit method from ACPI. Dec 12 17:24:29.772258 kernel: psci: PSCIv1.3 detected in firmware. Dec 12 17:24:29.772263 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:24:29.772267 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 12 17:24:29.772272 kernel: psci: SMC Calling Convention v1.4 Dec 12 17:24:29.772277 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 12 17:24:29.772281 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 12 17:24:29.772286 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:24:29.772291 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:24:29.772297 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 12 17:24:29.772302 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:24:29.772307 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Dec 12 17:24:29.772311 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:24:29.772316 kernel: CPU features: detected: Spectre-v4 Dec 12 17:24:29.772321 kernel: CPU features: detected: Spectre-BHB Dec 12 17:24:29.772325 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 12 17:24:29.772330 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 12 17:24:29.772335 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Dec 12 17:24:29.772339 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 12 17:24:29.772345 kernel: alternatives: applying boot alternatives Dec 12 17:24:29.772351 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:24:29.772356 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 17:24:29.772360 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:24:29.772365 kernel: Fallback order for Node 0: 0 Dec 12 17:24:29.772370 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Dec 12 17:24:29.772375 kernel: Policy zone: Normal Dec 12 17:24:29.772379 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:24:29.772384 kernel: software IO TLB: area num 2. Dec 12 17:24:29.772389 kernel: software IO TLB: mapped [mem 0x0000000037380000-0x000000003b380000] (64MB) Dec 12 17:24:29.772393 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 17:24:29.772399 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:24:29.772404 kernel: rcu: RCU event tracing is enabled. Dec 12 17:24:29.772409 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 17:24:29.772414 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:24:29.772418 kernel: Tracing variant of Tasks RCU enabled. Dec 12 17:24:29.772423 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 17:24:29.772428 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 17:24:29.772433 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 17:24:29.772437 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 12 17:24:29.772442 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:24:29.772447 kernel: GICv3: 960 SPIs implemented Dec 12 17:24:29.772452 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:24:29.772457 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:24:29.772462 kernel: GICv3: GICv3 features: 16 PPIs, RSS Dec 12 17:24:29.772466 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Dec 12 17:24:29.772471 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 12 17:24:29.772475 kernel: ITS: No ITS available, not enabling LPIs Dec 12 17:24:29.772480 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:24:29.772485 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Dec 12 17:24:29.772490 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 17:24:29.772495 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Dec 12 17:24:29.772500 kernel: Console: colour dummy device 80x25 Dec 12 17:24:29.772506 kernel: printk: legacy console [tty1] enabled Dec 12 17:24:29.772511 kernel: ACPI: Core revision 20240827 Dec 12 17:24:29.772516 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Dec 12 17:24:29.772521 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:24:29.772526 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:24:29.772531 kernel: landlock: Up and running. Dec 12 17:24:29.772536 kernel: SELinux: Initializing. Dec 12 17:24:29.772541 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:24:29.772546 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:24:29.772551 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Dec 12 17:24:29.772556 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Dec 12 17:24:29.772565 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 12 17:24:29.772571 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:24:29.772576 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:24:29.772582 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:24:29.772587 kernel: Remapping and enabling EFI services. Dec 12 17:24:29.772593 kernel: smp: Bringing up secondary CPUs ... Dec 12 17:24:29.772598 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:24:29.772603 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 12 17:24:29.772608 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Dec 12 17:24:29.772614 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 17:24:29.772620 kernel: SMP: Total of 2 processors activated. 
Dec 12 17:24:29.772625 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:24:29.772630 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:24:29.772635 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 12 17:24:29.772641 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 12 17:24:29.772646 kernel: CPU features: detected: Common not Private translations Dec 12 17:24:29.772652 kernel: CPU features: detected: CRC32 instructions Dec 12 17:24:29.772657 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Dec 12 17:24:29.772662 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 12 17:24:29.772667 kernel: CPU features: detected: LSE atomic instructions Dec 12 17:24:29.772672 kernel: CPU features: detected: Privileged Access Never Dec 12 17:24:29.772678 kernel: CPU features: detected: Speculation barrier (SB) Dec 12 17:24:29.772683 kernel: CPU features: detected: TLB range maintenance instructions Dec 12 17:24:29.772689 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 12 17:24:29.772694 kernel: CPU features: detected: Scalable Vector Extension Dec 12 17:24:29.772699 kernel: alternatives: applying system-wide alternatives Dec 12 17:24:29.772705 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Dec 12 17:24:29.772710 kernel: SVE: maximum available vector length 16 bytes per vector Dec 12 17:24:29.772715 kernel: SVE: default vector length 16 bytes per vector Dec 12 17:24:29.772720 kernel: Memory: 3979964K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 193008K reserved, 16384K cma-reserved) Dec 12 17:24:29.772726 kernel: devtmpfs: initialized Dec 12 17:24:29.772732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:24:29.772737 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 17:24:29.772742 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 12 17:24:29.772747 kernel: 0 pages in range for non-PLT usage Dec 12 17:24:29.772752 kernel: 515184 pages in range for PLT usage Dec 12 17:24:29.772757 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:24:29.772763 kernel: SMBIOS 3.1.0 present. Dec 12 17:24:29.772769 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Dec 12 17:24:29.772774 kernel: DMI: Memory slots populated: 2/2 Dec 12 17:24:29.772779 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:24:29.772784 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:24:29.772789 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:24:29.772795 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:24:29.772801 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:24:29.772806 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Dec 12 17:24:29.772811 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:24:29.772816 kernel: cpuidle: using governor menu Dec 12 17:24:29.772822 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 12 17:24:29.772827 kernel: ASID allocator initialised with 32768 entries Dec 12 17:24:29.772832 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:24:29.772837 kernel: Serial: AMBA PL011 UART driver Dec 12 17:24:29.772843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:24:29.772848 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:24:29.772869 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:24:29.772874 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:24:29.772880 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:24:29.772885 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:24:29.772890 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:24:29.772896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:24:29.772901 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:24:29.772907 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:24:29.772912 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:24:29.772917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:24:29.772922 kernel: ACPI: Interpreter enabled Dec 12 17:24:29.772927 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:24:29.772933 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 12 17:24:29.772938 kernel: printk: legacy console [ttyAMA0] enabled Dec 12 17:24:29.772944 kernel: printk: legacy bootconsole [pl11] disabled Dec 12 17:24:29.772949 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 12 17:24:29.772954 kernel: ACPI: CPU0 has been hot-added Dec 12 17:24:29.772959 kernel: ACPI: CPU1 has been hot-added Dec 12 17:24:29.772964 kernel: iommu: Default domain type: Translated Dec 12 17:24:29.772970 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:24:29.772975 kernel: efivars: Registered efivars operations Dec 12 17:24:29.772980 kernel: vgaarb: loaded Dec 12 17:24:29.772985 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:24:29.772990 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:24:29.772996 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:24:29.773001 kernel: pnp: PnP ACPI init Dec 12 17:24:29.773007 kernel: pnp: PnP ACPI: found 0 devices Dec 12 17:24:29.773012 kernel: NET: Registered PF_INET protocol family Dec 12 17:24:29.773017 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:24:29.773022 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:24:29.773027 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:24:29.773033 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:24:29.773038 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:24:29.773044 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:24:29.773049 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:24:29.773054 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:24:29.773060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:24:29.773065 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:24:29.773070 kernel: kvm [1]: HYP mode not available Dec 
12 17:24:29.773075 kernel: Initialise system trusted keyrings Dec 12 17:24:29.773080 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:24:29.773086 kernel: Key type asymmetric registered Dec 12 17:24:29.773091 kernel: Asymmetric key parser 'x509' registered Dec 12 17:24:29.773097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:24:29.773102 kernel: io scheduler mq-deadline registered Dec 12 17:24:29.773107 kernel: io scheduler kyber registered Dec 12 17:24:29.773112 kernel: io scheduler bfq registered Dec 12 17:24:29.773118 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:24:29.773124 kernel: thunder_xcv, ver 1.0 Dec 12 17:24:29.773129 kernel: thunder_bgx, ver 1.0 Dec 12 17:24:29.773134 kernel: nicpf, ver 1.0 Dec 12 17:24:29.773139 kernel: nicvf, ver 1.0 Dec 12 17:24:29.773283 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:24:29.773353 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:24:23 UTC (1765560263) Dec 12 17:24:29.773361 kernel: efifb: probing for efifb Dec 12 17:24:29.773367 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 12 17:24:29.773372 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 12 17:24:29.773377 kernel: efifb: scrolling: redraw Dec 12 17:24:29.773383 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 12 17:24:29.773388 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 17:24:29.773393 kernel: fb0: EFI VGA frame buffer device Dec 12 17:24:29.773399 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 12 17:24:29.773404 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:24:29.773409 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:24:29.773415 kernel: watchdog: NMI not fully supported Dec 12 17:24:29.773420 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:24:29.773425 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:24:29.773430 kernel: Segment Routing with IPv6 Dec 12 17:24:29.773436 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:24:29.773442 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:24:29.773447 kernel: Key type dns_resolver registered Dec 12 17:24:29.773452 kernel: registered taskstats version 1 Dec 12 17:24:29.773457 kernel: Loading compiled-in X.509 certificates Dec 12 17:24:29.773463 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 12 17:24:29.773468 kernel: Demotion targets for Node 0: null Dec 12 17:24:29.773474 kernel: Key type .fscrypt registered Dec 12 17:24:29.773479 kernel: Key type fscrypt-provisioning registered Dec 12 17:24:29.773484 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 12 17:24:29.773489 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:24:29.773494 kernel: ima: No architecture policies found Dec 12 17:24:29.773499 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:24:29.773505 kernel: clk: Disabling unused clocks Dec 12 17:24:29.773510 kernel: PM: genpd: Disabling unused power domains Dec 12 17:24:29.773516 kernel: Freeing unused kernel memory: 12416K Dec 12 17:24:29.773521 kernel: Run /init as init process Dec 12 17:24:29.773526 kernel: with arguments: Dec 12 17:24:29.773531 kernel: /init Dec 12 17:24:29.775878 kernel: with environment: Dec 12 17:24:29.775901 kernel: HOME=/ Dec 12 17:24:29.775908 kernel: TERM=linux Dec 12 17:24:29.775919 kernel: hv_vmbus: Vmbus version:5.3 Dec 12 17:24:29.775925 kernel: SCSI subsystem initialized Dec 12 17:24:29.775931 kernel: hv_vmbus: registering driver hid_hyperv Dec 12 17:24:29.775936 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 12 17:24:29.776079 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 12 17:24:29.776088 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 12 17:24:29.776095 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 12 17:24:29.776101 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 12 17:24:29.776106 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 12 17:24:29.776112 kernel: PTP clock support registered Dec 12 17:24:29.776117 kernel: hv_utils: Registering HyperV Utility Driver Dec 12 17:24:29.776122 kernel: hv_vmbus: registering driver hv_utils Dec 12 17:24:29.776127 kernel: hv_utils: Heartbeat IC version 3.0 Dec 12 17:24:29.776134 kernel: hv_utils: Shutdown IC version 3.2 Dec 12 17:24:29.776139 kernel: hv_utils: TimeSync IC version 4.0 Dec 12 17:24:29.776144 kernel: hv_vmbus: registering driver hv_storvsc Dec 12 17:24:29.776246 kernel: scsi host1: storvsc_host_t Dec 12 17:24:29.776326 kernel: scsi host0: storvsc_host_t Dec 12 17:24:29.776416 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 12 17:24:29.776500 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 12 17:24:29.776574 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 12 17:24:29.776647 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 12 17:24:29.776720 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 12 17:24:29.776792 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 12 17:24:29.777669 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 12 17:24:29.777791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 12 17:24:29.777879 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 12 17:24:29.777887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 17:24:29.777966 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 12 17:24:29.778043 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 12 17:24:29.778053 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 12 17:24:29.778125 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 12 17:24:29.778132 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 12 17:24:29.778138 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:24:29.778143 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:24:29.778149 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:24:29.778154 kernel: raid6: neonx8 gen() 18559 MB/s Dec 12 17:24:29.778161 kernel: raid6: neonx4 gen() 18575 MB/s Dec 12 17:24:29.778166 kernel: raid6: neonx2 gen() 17074 MB/s Dec 12 17:24:29.778171 kernel: raid6: neonx1 gen() 15023 MB/s Dec 12 17:24:29.778177 kernel: raid6: int64x8 gen() 10546 MB/s Dec 12 17:24:29.778182 kernel: raid6: int64x4 gen() 10615 MB/s Dec 12 17:24:29.778187 kernel: raid6: int64x2 gen() 8972 MB/s Dec 12 17:24:29.778192 kernel: raid6: int64x1 gen() 7006 MB/s Dec 12 17:24:29.778199 kernel: raid6: using algorithm neonx4 gen() 18575 MB/s Dec 12 17:24:29.778204 kernel: raid6: .... xor() 15128 MB/s, rmw enabled Dec 12 17:24:29.778209 kernel: raid6: using neon recovery algorithm Dec 12 17:24:29.778215 kernel: xor: measuring software checksum speed Dec 12 17:24:29.778220 kernel: 8regs : 28537 MB/sec Dec 12 17:24:29.778225 kernel: 32regs : 28745 MB/sec Dec 12 17:24:29.778230 kernel: arm64_neon : 37238 MB/sec Dec 12 17:24:29.778236 kernel: xor: using function: arm64_neon (37238 MB/sec) Dec 12 17:24:29.778242 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:24:29.778247 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (389) Dec 12 17:24:29.778253 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 12 17:24:29.778258 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:24:29.778263 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:24:29.778269 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:24:29.778274 kernel: loop: module loaded Dec 12 17:24:29.778280 kernel: loop0: detected capacity change from 0 to 91480 Dec 12 17:24:29.778285 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:24:29.778292 systemd[1]: Successfully made /usr/ read-only. Dec 12 17:24:29.778299 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:24:29.778305 systemd[1]: Detected virtualization microsoft. Dec 12 17:24:29.778312 systemd[1]: Detected architecture arm64. Dec 12 17:24:29.778318 systemd[1]: Running in initrd. Dec 12 17:24:29.778323 systemd[1]: No hostname configured, using default hostname. Dec 12 17:24:29.778329 systemd[1]: Hostname set to . Dec 12 17:24:29.778335 systemd[1]: Initializing machine ID from random generator. Dec 12 17:24:29.778340 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:24:29.778346 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:24:29.778353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:24:29.778358 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 12 17:24:29.778365 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:24:29.778371 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:24:29.778377 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:24:29.778383 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:24:29.778389 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:24:29.778395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:24:29.778401 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:24:29.778406 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:24:29.778412 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:24:29.778418 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:24:29.778424 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:24:29.778430 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:24:29.778436 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:24:29.778441 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:24:29.778447 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:24:29.778453 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:24:29.778459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:24:29.778470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:24:29.778476 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:24:29.778482 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:24:29.778488 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:24:29.778494 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:24:29.778501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:24:29.778507 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:24:29.778513 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:24:29.778519 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:24:29.778525 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:24:29.778531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:24:29.778553 systemd-journald[526]: Collecting audit messages is enabled. Dec 12 17:24:29.778569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:29.778576 systemd-journald[526]: Journal started Dec 12 17:24:29.778590 systemd-journald[526]: Runtime Journal (/run/log/journal/b36cb3208f6842c58640deb8ffb30bdf) is 8M, max 78.3M, 70.3M free. Dec 12 17:24:29.813452 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 12 17:24:29.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.824802 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:24:29.846415 kernel: audit: type=1130 audit(1765560269.813:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.846451 kernel: audit: type=1130 audit(1765560269.831:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.846558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:24:29.867105 kernel: audit: type=1130 audit(1765560269.851:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.851840 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:24:29.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.882019 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:24:29.896129 kernel: audit: type=1130 audit(1765560269.874:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:29.908878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:24:29.910502 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:24:30.021875 kernel: Bridge firewalling registered Dec 12 17:24:30.021241 systemd-tmpfiles[538]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:24:30.021726 systemd-modules-load[529]: Inserted module 'br_netfilter' Dec 12 17:24:30.025971 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:24:30.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.039641 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:24:30.078485 kernel: audit: type=1130 audit(1765560270.038:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:30.078510 kernel: audit: type=1130 audit(1765560270.065:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.079667 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:24:30.105427 kernel: audit: type=1130 audit(1765560270.084:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.086825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:24:30.106745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:24:30.132727 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:24:30.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.156486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:30.178026 kernel: audit: type=1130 audit(1765560270.143:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.178048 kernel: audit: type=1130 audit(1765560270.161:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.178365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:24:30.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.189178 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:24:30.211517 kernel: audit: type=1130 audit(1765560270.187:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.214000 audit: BPF prog-id=6 op=LOAD Dec 12 17:24:30.216089 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:24:30.273527 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 12 17:24:30.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.400011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:24:30.483684 systemd-resolved[555]: Positive Trust Anchors: Dec 12 17:24:30.484409 systemd-resolved[555]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:24:30.484413 systemd-resolved[555]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:24:30.484433 systemd-resolved[555]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:24:30.539683 dracut-cmdline[567]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:24:30.540204 systemd-resolved[555]: Defaulting to hostname 'linux'. Dec 12 17:24:30.540906 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:24:30.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:30.573899 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:24:30.759875 kernel: Loading iSCSI transport class v2.0-870. Dec 12 17:24:30.853906 kernel: iscsi: registered transport (tcp) Dec 12 17:24:30.909380 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:24:30.909407 kernel: QLogic iSCSI HBA Driver Dec 12 17:24:31.040372 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:24:31.059190 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:24:31.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.073426 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:24:31.118181 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:24:31.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.124819 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 12 17:24:31.145553 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:24:31.164413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:24:31.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.174000 audit: BPF prog-id=7 op=LOAD Dec 12 17:24:31.174000 audit: BPF prog-id=8 op=LOAD Dec 12 17:24:31.176713 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:24:31.295706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:24:31.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.340523 systemd-udevd[778]: Using default interface naming scheme 'v257'. Dec 12 17:24:31.346072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:24:31.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.355807 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:24:31.385000 audit: BPF prog-id=9 op=LOAD Dec 12 17:24:31.388014 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:24:31.400757 dracut-pre-trigger[910]: rd.md=0: removing MD RAID activation Dec 12 17:24:31.425619 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:24:31.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.436784 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:24:31.441919 systemd-networkd[911]: lo: Link UP Dec 12 17:24:31.441922 systemd-networkd[911]: lo: Gained carrier Dec 12 17:24:31.454075 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:24:31.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.460852 systemd[1]: Reached target network.target - Network. Dec 12 17:24:31.503055 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:24:31.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.517933 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:24:31.625184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:24:31.625297 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:31.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:31.635890 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:31.662096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:31.684211 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 12 17:24:31.684388 kernel: hv_vmbus: registering driver hv_netvsc Dec 12 17:24:31.688597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:24:31.688685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:31.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.699397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:31.761972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:31.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:31.789630 kernel: hv_netvsc 002248c2-24d7-0022-48c2-24d7002248c2 eth0: VF slot 1 added Dec 12 17:24:31.793221 systemd-networkd[911]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:24:31.793230 systemd-networkd[911]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 12 17:24:31.794423 systemd-networkd[911]: eth0: Link UP Dec 12 17:24:31.794698 systemd-networkd[911]: eth0: Gained carrier Dec 12 17:24:31.794709 systemd-networkd[911]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:24:31.841204 systemd-networkd[911]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 12 17:24:31.853028 kernel: hv_vmbus: registering driver hv_pci Dec 12 17:24:31.853049 kernel: hv_pci 37b7b55f-12f9-4007-92db-079247a936d4: PCI VMBus probing: Using version 0x10004 Dec 12 17:24:31.865431 kernel: hv_pci 37b7b55f-12f9-4007-92db-079247a936d4: PCI host bridge to bus 12f9:00 Dec 12 17:24:31.865672 kernel: pci_bus 12f9:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 12 17:24:31.865784 kernel: pci_bus 12f9:00: No busn resource found for root bus, will use [bus 00-ff] Dec 12 17:24:31.877330 kernel: pci 12f9:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 12 17:24:31.883884 kernel: pci 12f9:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 12 17:24:31.888882 kernel: pci 12f9:00:02.0: enabling Extended Tags Dec 12 17:24:31.904903 kernel: pci 12f9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 12f9:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 12 17:24:31.915939 kernel: pci_bus 12f9:00: busn_res: [bus 00-ff] end is updated to 00 Dec 12 17:24:31.916107 kernel: pci 12f9:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 12 17:24:32.342979 kernel: mlx5_core 12f9:00:02.0: enabling device (0000 -> 0002) Dec 12 17:24:32.351026 kernel: mlx5_core 12f9:00:02.0: PTM is not supported by PCIe Dec 12 17:24:32.351224 kernel: mlx5_core 12f9:00:02.0: firmware version: 16.30.5006 Dec 12 17:24:32.525352 kernel: hv_netvsc 002248c2-24d7-0022-48c2-24d7002248c2 eth0: VF registering: eth1 Dec 12 17:24:32.525617 kernel: mlx5_core 12f9:00:02.0 eth1: joined to eth0 Dec 12 17:24:32.531975 kernel: mlx5_core 12f9:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 12 17:24:32.540150 kernel: mlx5_core 12f9:00:02.0 enP4857s1: renamed from eth1 Dec 12 17:24:32.539963 systemd-networkd[911]: eth1: Interface name change detected, renamed to enP4857s1. Dec 12 17:24:32.607092 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 12 17:24:32.613307 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:24:32.678879 kernel: mlx5_core 12f9:00:02.0 enP4857s1: Link up Dec 12 17:24:32.711881 kernel: hv_netvsc 002248c2-24d7-0022-48c2-24d7002248c2 eth0: Data path switched to VF: enP4857s1 Dec 12 17:24:32.711978 systemd-networkd[911]: enP4857s1: Link UP Dec 12 17:24:32.796049 systemd-networkd[911]: enP4857s1: Gained carrier Dec 12 17:24:32.843999 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 12 17:24:32.890940 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 12 17:24:32.918471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 12 17:24:33.029907 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:24:33.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:33.035888 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:24:33.045300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:24:33.061398 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:24:33.066586 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:24:33.098879 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:24:33.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:33.446835 systemd-networkd[911]: eth0: Gained IPv6LL Dec 12 17:24:34.041454 disk-uuid[1027]: Warning: The kernel is still using the old partition table. Dec 12 17:24:34.041454 disk-uuid[1027]: The new table will be used at the next reboot or after you Dec 12 17:24:34.041454 disk-uuid[1027]: run partprobe(8) or kpartx(8) Dec 12 17:24:34.041454 disk-uuid[1027]: The operation has completed successfully. Dec 12 17:24:34.052602 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:24:34.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:34.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:34.053903 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:24:34.066921 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:24:34.149874 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1186) Dec 12 17:24:34.160953 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:24:34.161018 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:24:34.210545 kernel: BTRFS info (device sda6): turning on async discard Dec 12 17:24:34.210602 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 17:24:34.221885 kernel: BTRFS info (device sda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:24:34.220887 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:24:34.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:34.226995 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:24:36.673496 ignition[1205]: Ignition 2.22.0 Dec 12 17:24:36.673513 ignition[1205]: Stage: fetch-offline Dec 12 17:24:36.677381 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:24:36.692009 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 17:24:36.692030 kernel: audit: type=1130 audit(1765560276.684:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:36.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:36.673619 ignition[1205]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:24:36.673629 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:24:36.695016 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 12 17:24:36.673707 ignition[1205]: parsed url from cmdline: "" Dec 12 17:24:36.673710 ignition[1205]: no config URL provided Dec 12 17:24:36.673714 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:24:36.673722 ignition[1205]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:24:36.673726 ignition[1205]: failed to fetch config: resource requires networking Dec 12 17:24:36.674037 ignition[1205]: Ignition finished successfully Dec 12 17:24:36.732491 ignition[1211]: Ignition 2.22.0 Dec 12 17:24:36.732496 ignition[1211]: Stage: fetch Dec 12 17:24:36.733301 ignition[1211]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:24:36.733310 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:24:36.733389 ignition[1211]: parsed url from cmdline: "" Dec 12 17:24:36.733393 ignition[1211]: no config URL provided Dec 12 17:24:36.733396 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:24:36.733401 ignition[1211]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:24:36.733417 ignition[1211]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 12 17:24:36.868020 ignition[1211]: GET result: OK Dec 12 17:24:36.868085 ignition[1211]: config has been read from IMDS userdata Dec 12 17:24:36.871046 unknown[1211]: fetched base config from "system" Dec 12 17:24:36.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:36.868100 ignition[1211]: parsing config with SHA512: 867cc719cf07b42665c64fe65ce24787c94f5a4f33b80229f9d6a12f1f636a8d53a1e0c37189e580432504a952ecf4d49d1d6efa350296f454924050787b1bff Dec 12 17:24:36.871052 unknown[1211]: fetched base config from "system" Dec 12 17:24:36.903746 kernel: audit: type=1130 audit(1765560276.880:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:36.871299 ignition[1211]: fetch: fetch complete Dec 12 17:24:36.871055 unknown[1211]: fetched user config from "azure" Dec 12 17:24:36.871303 ignition[1211]: fetch: fetch passed Dec 12 17:24:36.876234 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 17:24:36.871340 ignition[1211]: Ignition finished successfully Dec 12 17:24:36.883646 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:24:36.928352 ignition[1218]: Ignition 2.22.0 Dec 12 17:24:36.928361 ignition[1218]: Stage: kargs Dec 12 17:24:36.928572 ignition[1218]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:24:36.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:36.934830 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:24:36.928578 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:24:36.972382 kernel: audit: type=1130 audit(1765560276.939:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:36.967883 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 17:24:36.932011 ignition[1218]: kargs: kargs passed Dec 12 17:24:36.932066 ignition[1218]: Ignition finished successfully Dec 12 17:24:36.984294 ignition[1224]: Ignition 2.22.0 Dec 12 17:24:36.984306 ignition[1224]: Stage: disks Dec 12 17:24:37.004956 kernel: audit: type=1130 audit(1765560276.991:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:36.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:36.986823 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:24:36.984561 ignition[1224]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:24:36.992440 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:24:36.984569 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:24:37.009717 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:24:36.985227 ignition[1224]: disks: disks passed Dec 12 17:24:37.019819 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:24:36.985280 ignition[1224]: Ignition finished successfully Dec 12 17:24:37.028187 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:24:37.037273 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:24:37.047186 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:24:37.272752 systemd-fsck[1233]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 12 17:24:37.282144 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:24:37.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:37.310027 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:24:37.317850 kernel: audit: type=1130 audit(1765560277.290:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:37.858873 kernel: EXT4-fs (sda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:24:37.859790 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:24:37.863618 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:24:37.943472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:24:37.961522 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:24:37.970178 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
Dec 12 17:24:37.981311 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:24:38.009231 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1248) Dec 12 17:24:38.009254 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:24:38.009262 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:24:37.981350 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:24:37.994046 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:24:38.017347 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:24:38.040796 kernel: BTRFS info (device sda6): turning on async discard Dec 12 17:24:38.040840 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 17:24:38.042169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:24:39.308277 coreos-metadata[1250]: Dec 12 17:24:39.307 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 12 17:24:39.316480 coreos-metadata[1250]: Dec 12 17:24:39.316 INFO Fetch successful Dec 12 17:24:39.320389 coreos-metadata[1250]: Dec 12 17:24:39.317 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 12 17:24:39.328751 coreos-metadata[1250]: Dec 12 17:24:39.326 INFO Fetch successful Dec 12 17:24:39.362901 coreos-metadata[1250]: Dec 12 17:24:39.362 INFO wrote hostname ci-4515.1.0-a-74f46d5ce1 to /sysroot/etc/hostname Dec 12 17:24:39.370145 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 17:24:39.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.392924 kernel: audit: type=1130 audit(1765560279.375:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:39.834889 initrd-setup-root[1279]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:24:39.926309 initrd-setup-root[1286]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:24:39.932336 initrd-setup-root[1293]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:24:39.937165 initrd-setup-root[1300]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:24:41.804929 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:24:41.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:41.827765 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:24:41.834954 kernel: audit: type=1130 audit(1765560281.810:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:41.851457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:24:41.905676 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 12 17:24:41.916241 kernel: BTRFS info (device sda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:24:41.927232 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:24:41.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:41.950136 kernel: audit: type=1130 audit(1765560281.932:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:41.953559 ignition[1371]: INFO : Ignition 2.22.0 Dec 12 17:24:41.957939 ignition[1371]: INFO : Stage: mount Dec 12 17:24:41.957939 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:24:41.957939 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:24:41.957939 ignition[1371]: INFO : mount: mount passed Dec 12 17:24:41.957939 ignition[1371]: INFO : Ignition finished successfully Dec 12 17:24:42.004235 kernel: audit: type=1130 audit(1765560281.967:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:41.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:41.962913 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:24:41.969274 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:24:42.008723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:24:42.042451 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1380) Dec 12 17:24:42.042517 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:24:42.047317 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:24:42.057065 kernel: BTRFS info (device sda6): turning on async discard Dec 12 17:24:42.057088 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 17:24:42.058901 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:24:42.088665 ignition[1397]: INFO : Ignition 2.22.0 Dec 12 17:24:42.088665 ignition[1397]: INFO : Stage: files Dec 12 17:24:42.095348 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:24:42.095348 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:24:42.095348 ignition[1397]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:24:42.095348 ignition[1397]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:24:42.095348 ignition[1397]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:24:42.246430 ignition[1397]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:24:42.253952 ignition[1397]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:24:42.253952 ignition[1397]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:24:42.246849 unknown[1397]: wrote ssh authorized keys file for user: core Dec 12 17:24:42.328876 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:24:42.337391 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 12 17:24:42.446975 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:24:42.619631 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:24:42.619631 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:24:42.634748 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:24:42.634748 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:24:42.634748 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:24:42.634748 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:24:42.634748 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:24:42.634748 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:24:42.634748 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:24:42.685532 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:24:42.685532 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:24:42.685532 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:24:42.685532 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:24:42.685532 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:24:42.685532 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 12 17:24:43.192214 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:24:43.453438 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:24:43.453438 ignition[1397]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:24:43.527993 ignition[1397]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:24:43.537539 ignition[1397]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:24:43.537539 ignition[1397]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:24:43.537539 ignition[1397]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:24:43.537539 ignition[1397]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:24:43.537539 ignition[1397]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:24:43.537539 ignition[1397]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:24:43.537539 ignition[1397]: INFO : files: files passed Dec 12 17:24:43.537539 ignition[1397]: INFO : Ignition finished successfully Dec 12 17:24:43.613133 kernel: audit: type=1130 audit(1765560283.550:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.539149 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:24:43.567533 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:24:43.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.600624 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:24:43.656951 kernel: audit: type=1130 audit(1765560283.621:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.656979 kernel: audit: type=1131 audit(1765560283.621:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:43.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.614112 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:24:43.614200 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:24:43.668182 initrd-setup-root-after-ignition[1428]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:24:43.668182 initrd-setup-root-after-ignition[1428]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:24:43.682219 initrd-setup-root-after-ignition[1432]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:24:43.682744 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:24:43.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.694669 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:24:43.723911 kernel: audit: type=1130 audit(1765560283.694:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.724084 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:24:43.763616 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:24:43.763733 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:24:43.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.773783 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:24:43.814732 kernel: audit: type=1130 audit(1765560283.773:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.814754 kernel: audit: type=1131 audit(1765560283.773:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.797492 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:24:43.820009 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:24:43.824016 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:24:43.856451 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:24:43.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:43.883878 kernel: audit: type=1130 audit(1765560283.861:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.881934 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:24:43.908382 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:24:43.908553 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:24:43.914446 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:24:43.925782 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:24:43.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:43.934533 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:24:43.934665 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:24:43.948784 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:24:43.953629 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:24:43.962180 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:24:43.972031 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:24:43.981685 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:24:43.991066 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:24:44.000789 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:24:44.011670 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:24:44.022049 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:24:44.031290 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:24:44.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.040294 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:24:44.049574 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:24:44.049693 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:24:44.065138 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:24:44.070519 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:24:44.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.079442 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:24:44.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.087876 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 12 17:24:44.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.094015 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:24:44.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.094125 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:24:44.109405 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:24:44.109510 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:24:44.115638 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:24:44.115712 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:24:44.124656 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 12 17:24:44.124743 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 17:24:44.142060 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:24:44.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.168809 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:24:44.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.216968 ignition[1452]: INFO : Ignition 2.22.0 Dec 12 17:24:44.216968 ignition[1452]: INFO : Stage: umount Dec 12 17:24:44.216968 ignition[1452]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:24:44.216968 ignition[1452]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:24:44.216968 ignition[1452]: INFO : umount: umount passed Dec 12 17:24:44.216968 ignition[1452]: INFO : Ignition finished successfully Dec 12 17:24:44.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.182842 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Dec 12 17:24:44.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.183009 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:24:44.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.201251 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:24:44.201352 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:24:44.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.211752 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:24:44.211873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:24:44.227104 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:24:44.229263 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:24:44.241962 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:24:44.242088 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:24:44.252109 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:24:44.252177 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:24:44.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.259337 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:24:44.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.259378 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:24:44.267946 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 17:24:44.267990 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 17:24:44.276543 systemd[1]: Stopped target network.target - Network. Dec 12 17:24:44.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.285213 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:24:44.285288 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:24:44.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.294703 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:24:44.303264 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:24:44.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:44.446000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:24:44.446000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:24:44.307327 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:24:44.312560 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:24:44.323640 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:24:44.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.335650 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:24:44.335700 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:24:44.343326 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:24:44.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.343351 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:24:44.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.353458 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:24:44.353474 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:24:44.363503 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:24:44.363565 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:24:44.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.372561 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:24:44.372599 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:24:44.383802 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:24:44.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.391918 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:24:44.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.406460 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:24:44.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.407013 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Dec 12 17:24:44.606267 kernel: hv_netvsc 002248c2-24d7-0022-48c2-24d7002248c2 eth0: Data path switched from VF: enP4857s1 Dec 12 17:24:44.407101 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:24:44.420383 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:24:44.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.420465 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:24:44.435332 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:24:44.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.437385 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:24:44.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.442683 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:24:44.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.452213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:24:44.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.452260 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:24:44.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.464186 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:24:44.464275 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:24:44.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:44.478021 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:24:44.487321 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:24:44.487406 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:24:44.496735 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:24:44.496791 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:24:44.505682 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:24:44.505727 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Dec 12 17:24:44.514001 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:24:44.532266 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:24:44.532464 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:24:44.541455 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:24:44.541500 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:24:44.550297 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:24:44.550337 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:24:44.560301 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:24:44.560363 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:24:44.574236 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:24:44.574301 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:24:44.583265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:24:44.583310 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:24:44.599845 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:24:44.616438 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:24:44.616525 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:24:44.622396 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:24:44.622443 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:24:44.636911 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 17:24:44.636976 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:24:44.646994 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:24:44.647050 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:24:44.656324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:24:44.656370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:24:44.666358 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:24:44.666462 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:24:44.681171 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:24:44.681304 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:24:44.689532 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:24:44.703039 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:24:44.739451 systemd[1]: Switching root. Dec 12 17:24:44.966266 systemd-journald[526]: Journal stopped Dec 12 17:24:53.474657 systemd-journald[526]: Received SIGTERM from PID 1 (systemd). 
Dec 12 17:24:53.474683 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:24:53.474694 kernel: SELinux: policy capability open_perms=1 Dec 12 17:24:53.474702 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:24:53.474708 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:24:53.474713 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:24:53.474720 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:24:53.474727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:24:53.474733 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:24:53.474740 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:24:53.474746 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 12 17:24:53.474752 kernel: audit: type=1403 audit(1765560286.836:91): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:24:53.474758 systemd[1]: Successfully loaded SELinux policy in 276.389ms. Dec 12 17:24:53.474765 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.607ms. Dec 12 17:24:53.474774 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:24:53.474781 systemd[1]: Detected virtualization microsoft. Dec 12 17:24:53.474787 systemd[1]: Detected architecture arm64. Dec 12 17:24:53.474793 systemd[1]: Detected first boot. Dec 12 17:24:53.474801 systemd[1]: Hostname set to . Dec 12 17:24:53.474808 systemd[1]: Initializing machine ID from random generator. Dec 12 17:24:53.474814 kernel: audit: type=1334 audit(1765560288.446:92): prog-id=10 op=LOAD Dec 12 17:24:53.474820 kernel: audit: type=1334 audit(1765560288.446:93): prog-id=10 op=UNLOAD Dec 12 17:24:53.474826 kernel: audit: type=1334 audit(1765560288.450:94): prog-id=11 op=LOAD Dec 12 17:24:53.474832 kernel: audit: type=1334 audit(1765560288.450:95): prog-id=11 op=UNLOAD Dec 12 17:24:53.474838 zram_generator::config[1495]: No configuration found. Dec 12 17:24:53.474847 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:24:53.475245 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:24:53.475265 kernel: audit: type=1334 audit(1765560292.506:96): prog-id=12 op=LOAD Dec 12 17:24:53.475272 kernel: audit: type=1334 audit(1765560292.506:97): prog-id=3 op=UNLOAD Dec 12 17:24:53.475279 kernel: audit: type=1334 audit(1765560292.510:98): prog-id=13 op=LOAD Dec 12 17:24:53.475285 kernel: audit: type=1334 audit(1765560292.514:99): prog-id=14 op=LOAD Dec 12 17:24:53.475295 kernel: audit: type=1334 audit(1765560292.514:100): prog-id=4 op=UNLOAD Dec 12 17:24:53.475301 kernel: audit: type=1334 audit(1765560292.514:101): prog-id=5 op=UNLOAD Dec 12 17:24:53.475309 kernel: audit: type=1131 audit(1765560292.519:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.475319 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:24:53.475326 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Dec 12 17:24:53.475333 kernel: audit: type=1334 audit(1765560292.552:103): prog-id=12 op=UNLOAD Dec 12 17:24:53.475340 kernel: audit: type=1130 audit(1765560292.560:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.475347 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:24:53.475355 kernel: audit: type=1131 audit(1765560292.560:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.475365 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:24:53.475372 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:24:53.475379 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:24:53.475386 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:24:53.475393 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:24:53.475400 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:24:53.475407 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:24:53.475414 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:24:53.475420 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:24:53.475428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:24:53.475435 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:24:53.475442 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:24:53.475449 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:24:53.475456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:24:53.475463 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:24:53.475470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:24:53.475478 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:24:53.475485 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:24:53.475492 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:24:53.475499 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:24:53.475506 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:24:53.475513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:24:53.475521 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:24:53.475527 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:24:53.475534 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:24:53.475541 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:24:53.475548 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Dec 12 17:24:53.475555 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:24:53.475563 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:24:53.475573 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:24:53.475579 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 17:24:53.475586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:24:53.475594 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:24:53.475601 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:24:53.475608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:24:53.475615 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:24:53.475622 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:24:53.475628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:24:53.475635 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:24:53.475643 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:24:53.475650 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:24:53.475657 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:24:53.475664 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:24:53.475671 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:24:53.475678 systemd[1]: Reached target machines.target - Containers. Dec 12 17:24:53.475684 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:24:53.475692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:53.475699 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:24:53.475706 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:24:53.475713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:24:53.475720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:24:53.475727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:24:53.475735 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:24:53.475742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:24:53.475749 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:24:53.475756 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:24:53.475762 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:24:53.475769 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:24:53.475776 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 12 17:24:53.475784 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:53.475798 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:24:53.475804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:24:53.475811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:24:53.475818 kernel: fuse: init (API version 7.41) Dec 12 17:24:53.475824 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:24:53.475831 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:24:53.475839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:24:53.475846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:24:53.475876 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:24:53.475886 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:24:53.475893 kernel: ACPI: bus type drm_connector registered Dec 12 17:24:53.475900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:24:53.475907 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:24:53.475915 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:24:53.475922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:24:53.475957 systemd-journald[1576]: Collecting audit messages is enabled. Dec 12 17:24:53.475974 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:24:53.475982 systemd-journald[1576]: Journal started Dec 12 17:24:53.476004 systemd-journald[1576]: Runtime Journal (/run/log/journal/8e15db53505c4886ae99dac54efccb9f) is 8M, max 78.3M, 70.3M free. Dec 12 17:24:53.476043 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:24:52.883000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 17:24:53.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.293000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:24:53.293000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:24:53.294000 audit: BPF prog-id=15 op=LOAD Dec 12 17:24:53.294000 audit: BPF prog-id=16 op=LOAD Dec 12 17:24:53.294000 audit: BPF prog-id=17 op=LOAD Dec 12 17:24:53.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:53.470000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:24:53.470000 audit[1576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd45f2b80 a2=4000 a3=0 items=0 ppid=1 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:53.470000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:24:52.502948 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:24:52.515912 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 12 17:24:52.520019 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:24:52.520348 systemd[1]: systemd-journald.service: Consumed 2.658s CPU time. Dec 12 17:24:53.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.494304 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:24:53.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.498917 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:24:53.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.504093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:24:53.504227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:24:53.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.508905 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:24:53.509028 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:24:53.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.515391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 12 17:24:53.515522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:24:53.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.520782 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:24:53.520944 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:24:53.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.525507 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:24:53.525626 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:24:53.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.532367 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:24:53.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.537397 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:24:53.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.543481 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:24:53.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.549446 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:24:53.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.555295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
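Annotation: the entries above show systemd-modules-load finishing after the individual modprobe@ units (configfs, dm_mod, drm, efi_pstore, fuse, loop). A minimal Python sketch of how one could verify those modules are available on the booted host; the module list is taken from the unit names in the log, and the /sys/module check is a heuristic, since not every built-in module exposes a directory there.

import os

MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

def module_present(name):
    # Loadable modules are listed in /proc/modules; loaded and many built-in
    # modules also expose a /sys/module/<name> directory.
    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}
    return name in loaded or os.path.isdir(f"/sys/module/{name}")

for mod in MODULES:
    print(f"{mod}: {'present' if module_present(mod) else 'not found'}")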
Dec 12 17:24:53.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.569527 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:24:53.574967 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:24:53.581551 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:24:53.591975 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:24:53.597091 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:24:53.597209 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:24:53.602712 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:24:53.609174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:24:53.609360 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:24:53.611995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:24:53.623608 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:24:53.629502 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:24:53.630447 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:24:53.636652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:24:53.637530 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:24:53.643922 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:24:53.650669 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:24:53.657654 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:24:53.662871 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:24:53.681465 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:24:53.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.686542 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:24:53.692750 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:24:53.713759 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:24:53.714632 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:24:53.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:53.750998 systemd-journald[1576]: Time spent on flushing to /var/log/journal/8e15db53505c4886ae99dac54efccb9f is 11.295ms for 1093 entries. Dec 12 17:24:53.750998 systemd-journald[1576]: System Journal (/var/log/journal/8e15db53505c4886ae99dac54efccb9f) is 8M, max 2.2G, 2.2G free. Dec 12 17:24:53.792904 systemd-journald[1576]: Received client request to flush runtime journal. Dec 12 17:24:53.792965 kernel: loop1: detected capacity change from 0 to 27736 Dec 12 17:24:53.794334 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:24:53.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.803101 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:24:53.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.981935 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Dec 12 17:24:53.981947 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Dec 12 17:24:53.984652 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:24:53.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:53.992387 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:24:54.353358 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:24:54.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:54.357000 audit: BPF prog-id=18 op=LOAD Dec 12 17:24:54.357000 audit: BPF prog-id=19 op=LOAD Dec 12 17:24:54.357000 audit: BPF prog-id=20 op=LOAD Dec 12 17:24:54.362012 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 17:24:54.368000 audit: BPF prog-id=21 op=LOAD Dec 12 17:24:54.371021 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:24:54.379299 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:24:54.397187 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. Dec 12 17:24:54.397455 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. Dec 12 17:24:54.401023 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:24:54.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:54.505000 audit: BPF prog-id=22 op=LOAD Dec 12 17:24:54.505000 audit: BPF prog-id=23 op=LOAD Dec 12 17:24:54.505000 audit: BPF prog-id=24 op=LOAD Dec 12 17:24:54.509042 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
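Annotation: the journald lines above report the runtime journal under /run/log/journal/<machine-id>, a system journal under /var/log/journal/<machine-id>, and 11.295 ms spent flushing 1093 entries. A small sketch that reproduces the per-entry arithmetic and derives the two journal paths from /etc/machine-id; the paths are the standard journald locations, and the figures come straight from the log.

from pathlib import Path

# Figures reported by systemd-journald above.
flush_ms, entries = 11.295, 1093
print(f"~{flush_ms / entries * 1000:.1f} us per flushed entry")

# Both journal directories are named after the machine ID.
machine_id = Path("/etc/machine-id").read_text().strip()
print("runtime journal:", Path("/run/log/journal") / machine_id)
print("system journal: ", Path("/var/log/journal") / machine_id)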
Dec 12 17:24:54.517000 audit: BPF prog-id=25 op=LOAD Dec 12 17:24:54.517000 audit: BPF prog-id=26 op=LOAD Dec 12 17:24:54.517000 audit: BPF prog-id=27 op=LOAD Dec 12 17:24:54.518940 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:24:54.594834 systemd-nsresourced[1657]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 17:24:54.596510 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:24:54.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:54.630339 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:24:54.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:54.711886 kernel: loop2: detected capacity change from 0 to 100192 Dec 12 17:24:54.746377 systemd-oomd[1652]: No swap; memory pressure usage will be degraded Dec 12 17:24:54.747034 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:24:54.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:54.795937 systemd-resolved[1653]: Positive Trust Anchors: Dec 12 17:24:54.795956 systemd-resolved[1653]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:24:54.795959 systemd-resolved[1653]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:24:54.795978 systemd-resolved[1653]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:24:54.997290 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:24:55.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:55.003000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:24:55.003000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:24:55.004000 audit: BPF prog-id=28 op=LOAD Dec 12 17:24:55.004000 audit: BPF prog-id=29 op=LOAD Dec 12 17:24:55.005656 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:24:55.035374 systemd-udevd[1675]: Using default interface naming scheme 'v257'. Dec 12 17:24:55.038601 systemd-resolved[1653]: Using system hostname 'ci-4515.1.0-a-74f46d5ce1'. Dec 12 17:24:55.039740 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
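Annotation: systemd-oomd warns above that without swap its memory-pressure handling is degraded; the pressure signal it acts on comes from the kernel's PSI interface. A sketch that reads /proc/pressure/memory the way a quick manual check would; the "some"/"full" lines with avg10/avg60/avg300/total fields are the standard PSI format, nothing here is specific to this host.

def read_memory_pressure(path="/proc/pressure/memory"):
    # Each line looks like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0"
    pressure = {}
    with open(path) as f:
        for line in f:
            kind, *fields = line.split()
            pressure[kind] = {k: float(v) for k, v in (fld.split("=") for fld in fields)}
    return pressure

for kind, values in read_memory_pressure().items():
    print(kind, values)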
Dec 12 17:24:55.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:55.044278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:24:55.573457 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:24:55.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:55.583000 audit: BPF prog-id=30 op=LOAD Dec 12 17:24:55.585195 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:24:55.640912 kernel: loop3: detected capacity change from 0 to 109872 Dec 12 17:24:55.648276 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:24:55.711919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#188 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 12 17:24:55.723904 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 17:24:55.780318 systemd-networkd[1689]: lo: Link UP Dec 12 17:24:55.780329 systemd-networkd[1689]: lo: Gained carrier Dec 12 17:24:55.781895 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:24:55.786424 systemd-networkd[1689]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:24:55.786436 systemd-networkd[1689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:24:55.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:55.788137 systemd[1]: Reached target network.target - Network. Dec 12 17:24:55.795015 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:24:55.804493 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:24:55.822879 kernel: hv_vmbus: registering driver hv_balloon Dec 12 17:24:55.822990 kernel: hv_vmbus: registering driver hyperv_fb Dec 12 17:24:55.823007 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 12 17:24:55.823023 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 12 17:24:55.823034 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 12 17:24:55.828877 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 12 17:24:55.834848 kernel: Console: switching to colour dummy device 80x25 Dec 12 17:24:55.842131 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 17:24:55.876124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:55.880232 kernel: mlx5_core 12f9:00:02.0 enP4857s1: Link up Dec 12 17:24:55.884113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:24:55.884369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
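Annotation: systemd-networkd matches eth0 against the catch-all /usr/lib/systemd/network/zz-default.network and, a few lines further on, acquires 10.200.20.11/24 over DHCPv4. A sketch of how the resulting addresses could be confirmed from userspace by parsing iproute2's JSON output; "ip -j addr show" is stock iproute2 and the interface name is taken from the log.

import json
import subprocess

def addresses(ifname="eth0"):
    # iproute2 emits JSON with -j; pull out the configured addresses.
    out = subprocess.run(["ip", "-j", "addr", "show", "dev", ifname],
                         capture_output=True, text=True, check=True).stdout
    (link,) = json.loads(out)
    return [f"{a['local']}/{a['prefixlen']}" for a in link.get("addr_info", [])]

print(addresses("eth0"))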
Dec 12 17:24:55.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:55.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:55.891152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:24:55.903872 kernel: hv_netvsc 002248c2-24d7-0022-48c2-24d7002248c2 eth0: Data path switched to VF: enP4857s1 Dec 12 17:24:55.904906 systemd-networkd[1689]: enP4857s1: Link UP Dec 12 17:24:55.905152 systemd-networkd[1689]: eth0: Link UP Dec 12 17:24:55.905379 systemd-networkd[1689]: eth0: Gained carrier Dec 12 17:24:55.905464 systemd-networkd[1689]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:24:55.909192 systemd-networkd[1689]: enP4857s1: Gained carrier Dec 12 17:24:55.914912 systemd-networkd[1689]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 12 17:24:55.978606 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:24:55.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:56.035931 kernel: MACsec IEEE 802.1AE Dec 12 17:24:56.167865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 12 17:24:56.174255 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:24:56.260781 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:24:56.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:56.571888 kernel: loop4: detected capacity change from 0 to 207008 Dec 12 17:24:56.607878 kernel: loop5: detected capacity change from 0 to 27736 Dec 12 17:24:56.621904 kernel: loop6: detected capacity change from 0 to 100192 Dec 12 17:24:56.635882 kernel: loop7: detected capacity change from 0 to 109872 Dec 12 17:24:56.649937 kernel: loop1: detected capacity change from 0 to 207008 Dec 12 17:24:56.663516 (sd-merge)[1806]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 12 17:24:56.666110 (sd-merge)[1806]: Merged extensions into '/usr'. Dec 12 17:24:56.668870 systemd[1]: Reload requested from client PID 1635 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:24:56.669093 systemd[1]: Reloading... Dec 12 17:24:56.726968 zram_generator::config[1838]: No configuration found. Dec 12 17:24:56.928561 systemd[1]: Reloading finished in 259 ms. Dec 12 17:24:56.946360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
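Annotation: (sd-merge) reports the Flatcar system extensions (containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw, oem-azure.raw) being merged into /usr, after which systemd reloads to pick up the units they ship. A sketch of how a merged extension set could be inspected; /etc/extensions, /run/extensions and /var/lib/extensions are standard systemd-sysext search paths and "systemd-sysext list" is the stock CLI, but whether Flatcar stages its images in exactly these directories is an assumption.

import subprocess
from pathlib import Path

# Standard sysext image search paths (assumption: the .raw files live here).
for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    images = sorted(p.name for p in Path(directory).glob("*.raw"))
    print(directory, images or "(none)")

# Ask systemd-sysext which extension images it currently knows about.
print(subprocess.run(["systemd-sysext", "list"],
                     capture_output=True, text=True).stdout)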
Dec 12 17:24:56.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:56.953890 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:24:56.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:56.964967 systemd[1]: Starting ensure-sysext.service... Dec 12 17:24:56.970004 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:24:56.975000 audit: BPF prog-id=31 op=LOAD Dec 12 17:24:56.975000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:24:56.975000 audit: BPF prog-id=32 op=LOAD Dec 12 17:24:56.975000 audit: BPF prog-id=33 op=LOAD Dec 12 17:24:56.975000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:24:56.975000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:24:56.975000 audit: BPF prog-id=34 op=LOAD Dec 12 17:24:56.975000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:24:56.976000 audit: BPF prog-id=35 op=LOAD Dec 12 17:24:56.976000 audit: BPF prog-id=36 op=LOAD Dec 12 17:24:56.976000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:24:56.976000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:24:56.976000 audit: BPF prog-id=37 op=LOAD Dec 12 17:24:56.976000 audit: BPF prog-id=38 op=LOAD Dec 12 17:24:56.976000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:24:56.976000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:24:56.976000 audit: BPF prog-id=39 op=LOAD Dec 12 17:24:56.976000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:24:56.978000 audit: BPF prog-id=40 op=LOAD Dec 12 17:24:56.978000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:24:56.978000 audit: BPF prog-id=41 op=LOAD Dec 12 17:24:56.978000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:24:56.978000 audit: BPF prog-id=42 op=LOAD Dec 12 17:24:56.978000 audit: BPF prog-id=43 op=LOAD Dec 12 17:24:56.978000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:24:56.978000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:24:56.979000 audit: BPF prog-id=44 op=LOAD Dec 12 17:24:56.979000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:24:56.979000 audit: BPF prog-id=45 op=LOAD Dec 12 17:24:56.979000 audit: BPF prog-id=46 op=LOAD Dec 12 17:24:56.979000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:24:56.979000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:24:56.983782 systemd[1]: Reload requested from client PID 1898 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:24:56.983797 systemd[1]: Reloading... Dec 12 17:24:57.027678 systemd-tmpfiles[1899]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:24:57.028047 systemd-tmpfiles[1899]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:24:57.028351 systemd-tmpfiles[1899]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:24:57.029680 systemd-tmpfiles[1899]: ACLs are not supported, ignoring. Dec 12 17:24:57.030047 systemd-tmpfiles[1899]: ACLs are not supported, ignoring. Dec 12 17:24:57.033913 zram_generator::config[1929]: No configuration found. Dec 12 17:24:57.071340 systemd-tmpfiles[1899]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 12 17:24:57.071571 systemd-tmpfiles[1899]: Skipping /boot Dec 12 17:24:57.080552 systemd-tmpfiles[1899]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:24:57.080564 systemd-tmpfiles[1899]: Skipping /boot Dec 12 17:24:57.207510 systemd[1]: Reloading finished in 223 ms. Dec 12 17:24:57.233353 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:24:57.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.240000 audit: BPF prog-id=47 op=LOAD Dec 12 17:24:57.240000 audit: BPF prog-id=48 op=LOAD Dec 12 17:24:57.240000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:24:57.240000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:24:57.240000 audit: BPF prog-id=49 op=LOAD Dec 12 17:24:57.240000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:24:57.240000 audit: BPF prog-id=50 op=LOAD Dec 12 17:24:57.240000 audit: BPF prog-id=51 op=LOAD Dec 12 17:24:57.240000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:24:57.240000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:24:57.241000 audit: BPF prog-id=52 op=LOAD Dec 12 17:24:57.241000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:24:57.241000 audit: BPF prog-id=53 op=LOAD Dec 12 17:24:57.241000 audit: BPF prog-id=54 op=LOAD Dec 12 17:24:57.241000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:24:57.241000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:24:57.241000 audit: BPF prog-id=55 op=LOAD Dec 12 17:24:57.241000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:24:57.241000 audit: BPF prog-id=56 op=LOAD Dec 12 17:24:57.241000 audit: BPF prog-id=57 op=LOAD Dec 12 17:24:57.241000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:24:57.241000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:24:57.242000 audit: BPF prog-id=58 op=LOAD Dec 12 17:24:57.242000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:24:57.243000 audit: BPF prog-id=59 op=LOAD Dec 12 17:24:57.243000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:24:57.243000 audit: BPF prog-id=60 op=LOAD Dec 12 17:24:57.243000 audit: BPF prog-id=61 op=LOAD Dec 12 17:24:57.243000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:24:57.243000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:24:57.243000 audit: BPF prog-id=62 op=LOAD Dec 12 17:24:57.243000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:24:57.255752 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:24:57.268833 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:24:57.275073 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:24:57.287216 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:24:57.295081 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:24:57.302383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:57.303000 audit[1994]: SYSTEM_BOOT pid=1994 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.304289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:24:57.317512 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
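Annotation: systemd-tmpfiles flags a few "Duplicate line for path ... ignoring" entries above while applying tmpfiles.d configuration (nfs-utils.conf and provision.conf). A rough sketch that scans tmpfiles.d fragments for paths declared more than once, using the standard "type path mode user group age argument" line layout; the directories searched are the usual tmpfiles.d locations, and the parsing deliberately ignores the per-filename precedence that real systemd-tmpfiles applies.

from collections import defaultdict
from pathlib import Path

def duplicate_tmpfiles_paths(dirs=("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d")):
    # Map each configured path (second field of a tmpfiles.d line) to the
    # fragment lines that declare it.
    seen = defaultdict(list)
    for d in dirs:
        for conf in sorted(Path(d).glob("*.conf")):
            for lineno, raw in enumerate(conf.read_text().splitlines(), 1):
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(f"{conf}:{lineno}")
    return {path: refs for path, refs in seen.items() if len(refs) > 1}

for path, refs in duplicate_tmpfiles_paths().items():
    print(path, "->", ", ".join(refs))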
Dec 12 17:24:57.324072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:24:57.328337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:24:57.328626 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:24:57.328837 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:57.330020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:24:57.333307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:24:57.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.338932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:24:57.339115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:24:57.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.344576 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:24:57.344756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:24:57.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.355167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:57.356425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:24:57.366589 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:24:57.380109 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:24:57.384268 systemd-networkd[1689]: eth0: Gained IPv6LL Dec 12 17:24:57.385479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:24:57.385719 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 12 17:24:57.385931 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:57.387164 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:24:57.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.393350 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:24:57.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.399655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:24:57.400108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:24:57.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.406040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:24:57.406310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:24:57.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.413844 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:24:57.414163 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:24:57.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.425658 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:24:57.430626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:24:57.431812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:24:57.444873 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:24:57.451113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 12 17:24:57.459073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:24:57.465118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:24:57.465281 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:24:57.465361 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:24:57.465464 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:24:57.472896 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:24:57.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.478709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:24:57.478911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:24:57.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.484732 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:24:57.484897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:24:57.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.490098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:24:57.490257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:24:57.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.495979 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:24:57.496128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:24:57.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:24:57.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.504155 systemd[1]: Finished ensure-sysext.service. Dec 12 17:24:57.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:24:57.510173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:24:57.510248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:24:58.037228 kernel: kauditd_printk_skb: 156 callbacks suppressed Dec 12 17:24:58.037353 kernel: audit: type=1305 audit(1765560298.033:260): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:24:58.033000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:24:58.037423 augenrules[2038]: No rules Dec 12 17:24:58.038940 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:24:58.039465 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:24:58.033000 audit[2038]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcfa97840 a2=420 a3=0 items=0 ppid=1990 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:58.065872 kernel: audit: type=1300 audit(1765560298.033:260): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcfa97840 a2=420 a3=0 items=0 ppid=1990 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:24:58.033000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:24:58.074259 kernel: audit: type=1327 audit(1765560298.033:260): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:24:59.063467 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:24:59.069296 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:25:09.705539 ldconfig[1992]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:25:09.719364 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:25:09.725831 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:25:09.786473 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:25:09.791778 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:25:09.796369 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
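Annotation: augenrules reports "No rules" and the accompanying SYSCALL/PROCTITLE audit records show auditctl being run against /etc/audit/audit.rules; the PROCTITLE field is the process command line, hex-encoded with NUL separators. A short sketch decoding that exact field from the record above:

# The PROCTITLE value from the audit record above: argv hex-encoded, NUL-joined.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

argv = bytes.fromhex(proctitle).split(b"\x00")
print([a.decode() for a in argv])   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']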
Dec 12 17:25:09.801573 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:25:09.807539 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:25:09.812127 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:25:09.817332 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:25:09.822842 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:25:09.827382 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:25:09.832578 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:25:09.832607 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:25:09.836860 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:25:09.905285 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:25:09.911133 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:25:09.916696 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:25:09.921920 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:25:09.927391 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:25:09.934044 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:25:09.938650 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:25:09.944308 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:25:09.949216 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:25:09.953375 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:25:09.957321 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:25:09.957349 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:25:09.959504 systemd[1]: Starting chronyd.service - NTP client/server... Dec 12 17:25:09.969965 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:25:09.976611 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:25:09.989443 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:25:09.995474 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:25:10.004057 chronyd[2050]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 12 17:25:10.008569 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:25:10.013972 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:25:10.018323 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:25:10.021023 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Dec 12 17:25:10.021246 jq[2058]: false Dec 12 17:25:10.025732 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 12 17:25:10.026558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:10.029111 KVP[2060]: KVP starting; pid is:2060 Dec 12 17:25:10.034492 KVP[2060]: KVP LIC Version: 3.1 Dec 12 17:25:10.034948 kernel: hv_utils: KVP IC version 4.0 Dec 12 17:25:10.036987 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:25:10.040770 chronyd[2050]: Timezone right/UTC failed leap second check, ignoring Dec 12 17:25:10.041378 chronyd[2050]: Loaded seccomp filter (level 2) Dec 12 17:25:10.041628 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:25:10.051500 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:25:10.059218 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:25:10.066974 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:25:10.074602 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:25:10.080617 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:25:10.081127 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:25:10.081584 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:25:10.089576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:25:10.096015 systemd[1]: Started chronyd.service - NTP client/server. Dec 12 17:25:10.102553 jq[2076]: true Dec 12 17:25:10.104300 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:25:10.111269 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:25:10.111474 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:25:10.128976 jq[2090]: true Dec 12 17:25:10.135181 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:25:10.136392 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:25:10.151516 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:25:10.151940 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:25:10.159984 extend-filesystems[2059]: Found /dev/sda6 Dec 12 17:25:10.164571 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:25:10.200893 extend-filesystems[2059]: Found /dev/sda9 Dec 12 17:25:10.205442 extend-filesystems[2059]: Checking size of /dev/sda9 Dec 12 17:25:10.213911 systemd-logind[2072]: New seat seat0. Dec 12 17:25:10.215031 systemd-logind[2072]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 12 17:25:10.215372 systemd[1]: Started systemd-logind.service - User Login Management. 
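Annotation: chronyd 4.8 starts, loads its seccomp filter, and comes up alongside logind and the other host services. A sketch of how synchronization status could be checked once it is running; "chronyc tracking" is the standard chrony client query, and the line-by-line parsing here is just a convenience.

import subprocess

def chrony_tracking():
    # 'chronyc tracking' prints "Field name : value" lines describing sync state.
    out = subprocess.run(["chronyc", "tracking"],
                         capture_output=True, text=True, check=True).stdout
    return {k.strip(): v.strip()
            for k, v in (line.split(":", 1) for line in out.splitlines() if ":" in line)}

info = chrony_tracking()
print(info.get("Stratum"), "|", info.get("Leap status"))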
Dec 12 17:25:10.234076 update_engine[2075]: I20251212 17:25:10.229790 2075 main.cc:92] Flatcar Update Engine starting Dec 12 17:25:10.246196 tar[2089]: linux-arm64/LICENSE Dec 12 17:25:10.246196 tar[2089]: linux-arm64/helm Dec 12 17:25:10.259510 extend-filesystems[2059]: Resized partition /dev/sda9 Dec 12 17:25:10.301458 bash[2126]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:25:10.303965 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:25:10.313622 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:25:10.340123 extend-filesystems[2144]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:25:10.368831 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Dec 12 17:25:10.369887 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Dec 12 17:25:10.432392 extend-filesystems[2144]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 12 17:25:10.432392 extend-filesystems[2144]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 12 17:25:10.432392 extend-filesystems[2144]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Dec 12 17:25:10.438888 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:25:10.497900 update_engine[2075]: I20251212 17:25:10.492796 2075 update_check_scheduler.cc:74] Next update check in 7m27s Dec 12 17:25:10.497964 extend-filesystems[2059]: Resized filesystem in /dev/sda9 Dec 12 17:25:10.482515 dbus-daemon[2053]: [system] SELinux support is enabled Dec 12 17:25:10.439131 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:25:10.511242 dbus-daemon[2053]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 17:25:10.483456 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:25:10.509603 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:25:10.509629 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:25:10.517157 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:25:10.517172 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:25:10.524434 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:25:10.538565 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
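Annotation: extend-filesystems grows the root partition and resize2fs reports the ext4 filesystem on /dev/sda9 going from 6359552 to 6376955 4 KiB blocks while mounted (online resize). The arithmetic behind those figures as a quick sketch; the block counts are copied from the log and the 4096-byte block size from the kernel message.

BLOCK = 4096                       # "(4k) blocks" per the kernel message above
old_blocks, new_blocks = 6359552, 6376955

grown = (new_blocks - old_blocks) * BLOCK
total = new_blocks * BLOCK
print(f"grew by {grown / 2**20:.1f} MiB, new size {total / 2**30:.2f} GiB")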
Dec 12 17:25:10.559058 sshd_keygen[2074]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:25:10.585805 coreos-metadata[2052]: Dec 12 17:25:10.585 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 12 17:25:10.590139 coreos-metadata[2052]: Dec 12 17:25:10.590 INFO Fetch successful Dec 12 17:25:10.590139 coreos-metadata[2052]: Dec 12 17:25:10.590 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 12 17:25:10.597946 coreos-metadata[2052]: Dec 12 17:25:10.597 INFO Fetch successful Dec 12 17:25:10.598240 coreos-metadata[2052]: Dec 12 17:25:10.598 INFO Fetching http://168.63.129.16/machine/dd4ed16e-252a-4bda-bff2-a7b62a422828/a607dab5%2De58d%2D4895%2Dbfa2%2De11f97bc0ab2.%5Fci%2D4515.1.0%2Da%2D74f46d5ce1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 12 17:25:10.598720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:25:10.605603 coreos-metadata[2052]: Dec 12 17:25:10.603 INFO Fetch successful Dec 12 17:25:10.605603 coreos-metadata[2052]: Dec 12 17:25:10.604 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 12 17:25:10.608099 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:25:10.616058 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 12 17:25:10.622422 coreos-metadata[2052]: Dec 12 17:25:10.621 INFO Fetch successful Dec 12 17:25:10.645852 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:25:10.648071 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:25:10.657540 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:25:10.678816 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:25:10.687497 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 12 17:25:10.697477 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:25:10.706366 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:25:10.713638 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:25:10.720037 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 17:25:10.727736 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:25:10.765505 tar[2089]: linux-arm64/README.md Dec 12 17:25:10.781847 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
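Annotation: coreos-metadata pulls provisioning data from the Azure wire server (168.63.129.16) and from the instance metadata service at 169.254.169.254, including the vmSize query shown above. A sketch of that IMDS request; the URL is the one in the log, and the "Metadata: true" header is the usual IMDS requirement, which the fetcher is assumed to send as well.

import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

# Azure IMDS requires this header on every request.
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())    # the instance's VM size string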
Dec 12 17:25:10.908097 locksmithd[2197]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:25:11.004928 containerd[2108]: time="2025-12-12T17:25:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:25:11.006066 containerd[2108]: time="2025-12-12T17:25:11.005795204Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:25:11.012575 containerd[2108]: time="2025-12-12T17:25:11.012546772Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.728µs" Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.012670476Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.012710948Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.012727796Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.012873012Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.012886628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.012924004Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.012930548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.013076924Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.013087476Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.013093700Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.013098540Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013751 containerd[2108]: time="2025-12-12T17:25:11.013238308Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013977 containerd[2108]: time="2025-12-12T17:25:11.013249548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013977 containerd[2108]: 
time="2025-12-12T17:25:11.013298932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013977 containerd[2108]: time="2025-12-12T17:25:11.013412988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013977 containerd[2108]: time="2025-12-12T17:25:11.013430644Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:25:11.013977 containerd[2108]: time="2025-12-12T17:25:11.013453108Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:25:11.013977 containerd[2108]: time="2025-12-12T17:25:11.013474844Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:25:11.013977 containerd[2108]: time="2025-12-12T17:25:11.013603724Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:25:11.013977 containerd[2108]: time="2025-12-12T17:25:11.013656476Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:25:11.030424 containerd[2108]: time="2025-12-12T17:25:11.030357732Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:25:11.030535 containerd[2108]: time="2025-12-12T17:25:11.030521356Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:25:11.030719 containerd[2108]: time="2025-12-12T17:25:11.030701044Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:25:11.030796 containerd[2108]: time="2025-12-12T17:25:11.030781316Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:25:11.030871 containerd[2108]: time="2025-12-12T17:25:11.030846508Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:25:11.030920 containerd[2108]: time="2025-12-12T17:25:11.030907732Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:25:11.030977 containerd[2108]: time="2025-12-12T17:25:11.030965780Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:25:11.031020 containerd[2108]: time="2025-12-12T17:25:11.031007980Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:25:11.031061 containerd[2108]: time="2025-12-12T17:25:11.031051044Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:25:11.031117 containerd[2108]: time="2025-12-12T17:25:11.031104452Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:25:11.031162 containerd[2108]: time="2025-12-12T17:25:11.031150396Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:25:11.031202 containerd[2108]: time="2025-12-12T17:25:11.031190444Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Dec 12 17:25:11.031239 containerd[2108]: time="2025-12-12T17:25:11.031228556Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:25:11.031303 containerd[2108]: time="2025-12-12T17:25:11.031289636Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:25:11.031466 containerd[2108]: time="2025-12-12T17:25:11.031445676Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:25:11.031536 containerd[2108]: time="2025-12-12T17:25:11.031523828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:25:11.031594 containerd[2108]: time="2025-12-12T17:25:11.031581740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:25:11.031647 containerd[2108]: time="2025-12-12T17:25:11.031635068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:25:11.031699 containerd[2108]: time="2025-12-12T17:25:11.031687212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:25:11.031744 containerd[2108]: time="2025-12-12T17:25:11.031733500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:25:11.031794 containerd[2108]: time="2025-12-12T17:25:11.031782924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:25:11.031837 containerd[2108]: time="2025-12-12T17:25:11.031827052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:25:11.031905 containerd[2108]: time="2025-12-12T17:25:11.031892260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:25:11.031980 containerd[2108]: time="2025-12-12T17:25:11.031967420Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:25:11.032024 containerd[2108]: time="2025-12-12T17:25:11.032012292Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:25:11.032079 containerd[2108]: time="2025-12-12T17:25:11.032066812Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:25:11.032156 containerd[2108]: time="2025-12-12T17:25:11.032143740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:25:11.032199 containerd[2108]: time="2025-12-12T17:25:11.032190780Z" level=info msg="Start snapshots syncer" Dec 12 17:25:11.032270 containerd[2108]: time="2025-12-12T17:25:11.032259788Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:25:11.032621 containerd[2108]: time="2025-12-12T17:25:11.032584980Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:25:11.032778 containerd[2108]: time="2025-12-12T17:25:11.032760156Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:25:11.032900 containerd[2108]: time="2025-12-12T17:25:11.032885932Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:25:11.033110 containerd[2108]: time="2025-12-12T17:25:11.033073068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:25:11.033182 containerd[2108]: time="2025-12-12T17:25:11.033168844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:25:11.033225 containerd[2108]: time="2025-12-12T17:25:11.033213860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:25:11.033276 containerd[2108]: time="2025-12-12T17:25:11.033264876Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:25:11.033323 containerd[2108]: time="2025-12-12T17:25:11.033311748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:25:11.033376 containerd[2108]: time="2025-12-12T17:25:11.033359324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:25:11.033440 containerd[2108]: time="2025-12-12T17:25:11.033428084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:25:11.033483 containerd[2108]: time="2025-12-12T17:25:11.033472108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
17:25:11.033524 containerd[2108]: time="2025-12-12T17:25:11.033514340Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:25:11.033580 containerd[2108]: time="2025-12-12T17:25:11.033570924Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033656476Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033670572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033677964Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033683540Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033692268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033700364Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033708756Z" level=info msg="runtime interface created" Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033711940Z" level=info msg="created NRI interface" Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033717196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033725972Z" level=info msg="Connect containerd service" Dec 12 17:25:11.033766 containerd[2108]: time="2025-12-12T17:25:11.033743140Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:25:11.034878 containerd[2108]: time="2025-12-12T17:25:11.034642844Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:25:11.075267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:11.087152 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:11.418991 kubelet[2254]: E1212 17:25:11.418828 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:11.420664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:11.420777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:11.421146 systemd[1]: kubelet.service: Consumed 542ms CPU time, 256.3M memory peak. 
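kubelet.service exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written during `kubeadm init`/`kubeadm join`, so repeated failures before the node is joined are expected. A minimal sketch of what such a file could contain follows, written from Python so the snippet stays self-contained. The field names follow the kubelet.config.k8s.io/v1beta1 schema; the values are illustrative assumptions, not taken from this host.

```python
from pathlib import Path

# Illustrative KubeletConfiguration; cgroupDriver=systemd matches the
# SystemdCgroup=true runc option visible in the containerd config above.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)  # needs root on a real node
path.write_text(KUBELET_CONFIG)
print(f"wrote {path} ({len(KUBELET_CONFIG)} bytes)")
```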
Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793024212Z" level=info msg="Start subscribing containerd event" Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793094668Z" level=info msg="Start recovering state" Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793195556Z" level=info msg="Start event monitor" Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793202788Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793206436Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793235764Z" level=info msg="Start streaming server" Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793242628Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793247404Z" level=info msg="runtime interface starting up..." Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793251316Z" level=info msg="starting plugins..." Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793255100Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793262308Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:25:11.793934 containerd[2108]: time="2025-12-12T17:25:11.793380412Z" level=info msg="containerd successfully booted in 0.788798s" Dec 12 17:25:11.793686 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:25:11.800434 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:25:11.806288 systemd[1]: Startup finished in 5.422s (kernel) + 18.801s (initrd) + 25.241s (userspace) = 49.465s. Dec 12 17:25:12.767370 login[2224]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:12.769066 login[2225]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:12.774443 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:25:12.775287 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:25:12.780592 systemd-logind[2072]: New session 2 of user core. Dec 12 17:25:12.782878 systemd-logind[2072]: New session 1 of user core. Dec 12 17:25:12.791971 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:25:12.794158 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:25:12.801568 (systemd)[2272]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:25:12.805066 systemd-logind[2072]: New session c1 of user core. Dec 12 17:25:12.990732 systemd[2272]: Queued start job for default target default.target. Dec 12 17:25:12.996661 systemd[2272]: Created slice app.slice - User Application Slice. Dec 12 17:25:12.996691 systemd[2272]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:25:12.996701 systemd[2272]: Reached target paths.target - Paths. Dec 12 17:25:12.996743 systemd[2272]: Reached target timers.target - Timers. Dec 12 17:25:12.997837 systemd[2272]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:25:12.999474 systemd[2272]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... 
Dec 12 17:25:13.005252 systemd[2272]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:25:13.005408 systemd[2272]: Reached target sockets.target - Sockets. Dec 12 17:25:13.009675 systemd[2272]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:25:13.009834 systemd[2272]: Reached target basic.target - Basic System. Dec 12 17:25:13.010037 systemd[2272]: Reached target default.target - Main User Target. Dec 12 17:25:13.010134 systemd[2272]: Startup finished in 198ms. Dec 12 17:25:13.010151 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:25:13.013975 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:25:13.014504 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:25:14.823584 waagent[2223]: 2025-12-12T17:25:14.823504Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 12 17:25:14.828344 waagent[2223]: 2025-12-12T17:25:14.828276Z INFO Daemon Daemon OS: flatcar 4515.1.0 Dec 12 17:25:14.831820 waagent[2223]: 2025-12-12T17:25:14.831778Z INFO Daemon Daemon Python: 3.11.13 Dec 12 17:25:14.838068 waagent[2223]: 2025-12-12T17:25:14.835976Z INFO Daemon Daemon Run daemon Dec 12 17:25:14.839127 waagent[2223]: 2025-12-12T17:25:14.839087Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Dec 12 17:25:14.845685 waagent[2223]: 2025-12-12T17:25:14.845642Z INFO Daemon Daemon Using waagent for provisioning Dec 12 17:25:14.849792 waagent[2223]: 2025-12-12T17:25:14.849752Z INFO Daemon Daemon Activate resource disk Dec 12 17:25:14.853311 waagent[2223]: 2025-12-12T17:25:14.853271Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 12 17:25:14.862046 waagent[2223]: 2025-12-12T17:25:14.861999Z INFO Daemon Daemon Found device: None Dec 12 17:25:14.865879 waagent[2223]: 2025-12-12T17:25:14.865836Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 12 17:25:14.872646 waagent[2223]: 2025-12-12T17:25:14.872614Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 12 17:25:14.882112 waagent[2223]: 2025-12-12T17:25:14.882067Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 12 17:25:14.886403 waagent[2223]: 2025-12-12T17:25:14.886371Z INFO Daemon Daemon Running default provisioning handler Dec 12 17:25:14.895761 waagent[2223]: 2025-12-12T17:25:14.895331Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 12 17:25:14.905550 waagent[2223]: 2025-12-12T17:25:14.905507Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 12 17:25:14.912918 waagent[2223]: 2025-12-12T17:25:14.912877Z INFO Daemon Daemon cloud-init is enabled: False Dec 12 17:25:14.916696 waagent[2223]: 2025-12-12T17:25:14.916667Z INFO Daemon Daemon Copying ovf-env.xml Dec 12 17:25:15.009022 waagent[2223]: 2025-12-12T17:25:15.008942Z INFO Daemon Daemon Successfully mounted dvd Dec 12 17:25:15.058466 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Dec 12 17:25:15.060275 waagent[2223]: 2025-12-12T17:25:15.060213Z INFO Daemon Daemon Detect protocol endpoint Dec 12 17:25:15.064495 waagent[2223]: 2025-12-12T17:25:15.064443Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 12 17:25:15.069517 waagent[2223]: 2025-12-12T17:25:15.069473Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 12 17:25:15.074503 waagent[2223]: 2025-12-12T17:25:15.074434Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 12 17:25:15.078419 waagent[2223]: 2025-12-12T17:25:15.078385Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 12 17:25:15.082148 waagent[2223]: 2025-12-12T17:25:15.082117Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 12 17:25:15.199292 waagent[2223]: 2025-12-12T17:25:15.199250Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 12 17:25:15.204661 waagent[2223]: 2025-12-12T17:25:15.204639Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 12 17:25:15.208578 waagent[2223]: 2025-12-12T17:25:15.208552Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 12 17:25:15.503904 waagent[2223]: 2025-12-12T17:25:15.503647Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 12 17:25:15.508467 waagent[2223]: 2025-12-12T17:25:15.508419Z INFO Daemon Daemon Forcing an update of the goal state. Dec 12 17:25:15.516022 waagent[2223]: 2025-12-12T17:25:15.515984Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 12 17:25:15.551751 waagent[2223]: 2025-12-12T17:25:15.551711Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 12 17:25:15.556290 waagent[2223]: 2025-12-12T17:25:15.556246Z INFO Daemon Dec 12 17:25:15.559260 waagent[2223]: 2025-12-12T17:25:15.559216Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3b7562d9-1b3f-4ab6-8f93-0f841ec0e96a eTag: 13539441095574153608 source: Fabric] Dec 12 17:25:15.568373 waagent[2223]: 2025-12-12T17:25:15.568334Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 12 17:25:15.573988 waagent[2223]: 2025-12-12T17:25:15.573951Z INFO Daemon Dec 12 17:25:15.577211 waagent[2223]: 2025-12-12T17:25:15.577178Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 12 17:25:15.588736 waagent[2223]: 2025-12-12T17:25:15.588697Z INFO Daemon Daemon Downloading artifacts profile blob Dec 12 17:25:15.649158 waagent[2223]: 2025-12-12T17:25:15.649089Z INFO Daemon Downloaded certificate {'thumbprint': '43757DA8236030CC0D5231AB18DB60D31354BBA9', 'hasPrivateKey': True} Dec 12 17:25:15.656627 waagent[2223]: 2025-12-12T17:25:15.656586Z INFO Daemon Fetch goal state completed Dec 12 17:25:15.666627 waagent[2223]: 2025-12-12T17:25:15.666575Z INFO Daemon Daemon Starting provisioning Dec 12 17:25:15.671026 waagent[2223]: 2025-12-12T17:25:15.670987Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 12 17:25:15.674576 waagent[2223]: 2025-12-12T17:25:15.674546Z INFO Daemon Daemon Set hostname [ci-4515.1.0-a-74f46d5ce1] Dec 12 17:25:15.827237 waagent[2223]: 2025-12-12T17:25:15.827173Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-a-74f46d5ce1] Dec 12 17:25:15.832025 waagent[2223]: 2025-12-12T17:25:15.831975Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 12 17:25:15.836571 waagent[2223]: 2025-12-12T17:25:15.836533Z INFO Daemon Daemon Primary interface is [eth0] Dec 12 17:25:15.846568 systemd-networkd[1689]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:25:15.846575 systemd-networkd[1689]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:25:15.846660 systemd-networkd[1689]: eth0: DHCP lease lost Dec 12 17:25:15.857441 waagent[2223]: 2025-12-12T17:25:15.857106Z INFO Daemon Daemon Create user account if not exists Dec 12 17:25:15.861583 waagent[2223]: 2025-12-12T17:25:15.861526Z INFO Daemon Daemon User core already exists, skip useradd Dec 12 17:25:15.865893 waagent[2223]: 2025-12-12T17:25:15.865833Z INFO Daemon Daemon Configure sudoer Dec 12 17:25:15.869933 systemd-networkd[1689]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 12 17:25:15.873529 waagent[2223]: 2025-12-12T17:25:15.873471Z INFO Daemon Daemon Configure sshd Dec 12 17:25:15.880098 waagent[2223]: 2025-12-12T17:25:15.880048Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 12 17:25:15.889959 waagent[2223]: 2025-12-12T17:25:15.889917Z INFO Daemon Daemon Deploy ssh public key. Dec 12 17:25:16.985940 waagent[2223]: 2025-12-12T17:25:16.985890Z INFO Daemon Daemon Provisioning complete Dec 12 17:25:17.000330 waagent[2223]: 2025-12-12T17:25:17.000288Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 12 17:25:17.004795 waagent[2223]: 2025-12-12T17:25:17.004758Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 12 17:25:17.011519 waagent[2223]: 2025-12-12T17:25:17.011492Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 12 17:25:17.113197 waagent[2325]: 2025-12-12T17:25:17.113110Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 12 17:25:17.113476 waagent[2325]: 2025-12-12T17:25:17.113256Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Dec 12 17:25:17.113476 waagent[2325]: 2025-12-12T17:25:17.113296Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 12 17:25:17.113476 waagent[2325]: 2025-12-12T17:25:17.113333Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 12 17:25:17.204354 waagent[2325]: 2025-12-12T17:25:17.204274Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 12 17:25:17.204518 waagent[2325]: 2025-12-12T17:25:17.204488Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 17:25:17.204554 waagent[2325]: 2025-12-12T17:25:17.204542Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 17:25:17.210723 waagent[2325]: 2025-12-12T17:25:17.210671Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 12 17:25:17.215926 waagent[2325]: 2025-12-12T17:25:17.215890Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 12 17:25:17.216306 waagent[2325]: 2025-12-12T17:25:17.216272Z INFO ExtHandler Dec 12 17:25:17.216361 waagent[2325]: 2025-12-12T17:25:17.216340Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c54d8e6b-e1b8-4e29-a179-d0bb37714d8e eTag: 13539441095574153608 source: Fabric] Dec 12 17:25:17.216586 waagent[2325]: 2025-12-12T17:25:17.216559Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 12 17:25:17.217034 waagent[2325]: 2025-12-12T17:25:17.217001Z INFO ExtHandler Dec 12 17:25:17.217077 waagent[2325]: 2025-12-12T17:25:17.217059Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 12 17:25:17.222134 waagent[2325]: 2025-12-12T17:25:17.222105Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 12 17:25:17.279058 waagent[2325]: 2025-12-12T17:25:17.278946Z INFO ExtHandler Downloaded certificate {'thumbprint': '43757DA8236030CC0D5231AB18DB60D31354BBA9', 'hasPrivateKey': True} Dec 12 17:25:17.279400 waagent[2325]: 2025-12-12T17:25:17.279362Z INFO ExtHandler Fetch goal state completed Dec 12 17:25:17.292555 waagent[2325]: 2025-12-12T17:25:17.292497Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Dec 12 17:25:17.295971 waagent[2325]: 2025-12-12T17:25:17.295924Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2325 Dec 12 17:25:17.296080 waagent[2325]: 2025-12-12T17:25:17.296050Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 12 17:25:17.296334 waagent[2325]: 2025-12-12T17:25:17.296304Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 12 17:25:17.297486 waagent[2325]: 2025-12-12T17:25:17.297449Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 12 17:25:17.297809 waagent[2325]: 2025-12-12T17:25:17.297778Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 12 17:25:17.297956 waagent[2325]: 2025-12-12T17:25:17.297928Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 12 17:25:17.298390 waagent[2325]: 2025-12-12T17:25:17.298357Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 12 17:25:17.482485 waagent[2325]: 2025-12-12T17:25:17.482446Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 12 17:25:17.482670 waagent[2325]: 2025-12-12T17:25:17.482640Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 12 17:25:17.487831 waagent[2325]: 2025-12-12T17:25:17.487799Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 12 17:25:17.492563 systemd[1]: Reload requested from client PID 2342 ('systemctl') (unit waagent.service)... Dec 12 17:25:17.492579 systemd[1]: Reloading... Dec 12 17:25:17.580002 zram_generator::config[2408]: No configuration found. Dec 12 17:25:17.707933 systemd[1]: Reloading finished in 215 ms. Dec 12 17:25:17.723876 waagent[2325]: 2025-12-12T17:25:17.721771Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 12 17:25:17.723876 waagent[2325]: 2025-12-12T17:25:17.721933Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 12 17:25:18.285435 waagent[2325]: 2025-12-12T17:25:18.284580Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 12 17:25:18.285435 waagent[2325]: 2025-12-12T17:25:18.284929Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 12 17:25:18.285792 waagent[2325]: 2025-12-12T17:25:18.285747Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 17:25:18.285839 waagent[2325]: 2025-12-12T17:25:18.285814Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 17:25:18.285943 waagent[2325]: 2025-12-12T17:25:18.285898Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 12 17:25:18.286162 waagent[2325]: 2025-12-12T17:25:18.286127Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 12 17:25:18.286581 waagent[2325]: 2025-12-12T17:25:18.286543Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 12 17:25:18.286695 waagent[2325]: 2025-12-12T17:25:18.286662Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 17:25:18.286925 waagent[2325]: 2025-12-12T17:25:18.286889Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 12 17:25:18.286988 waagent[2325]: 2025-12-12T17:25:18.286957Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 12 17:25:18.286988 waagent[2325]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 12 17:25:18.286988 waagent[2325]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 12 17:25:18.286988 waagent[2325]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 12 17:25:18.286988 waagent[2325]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 12 17:25:18.286988 waagent[2325]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 12 17:25:18.286988 waagent[2325]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 12 17:25:18.287167 waagent[2325]: 2025-12-12T17:25:18.287139Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 12 17:25:18.287406 waagent[2325]: 2025-12-12T17:25:18.287368Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 12 17:25:18.287699 waagent[2325]: 2025-12-12T17:25:18.287672Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 12 17:25:18.287782 waagent[2325]: 2025-12-12T17:25:18.287741Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 12 17:25:18.288291 waagent[2325]: 2025-12-12T17:25:18.288260Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 17:25:18.288814 waagent[2325]: 2025-12-12T17:25:18.288786Z INFO EnvHandler ExtHandler Configure routes Dec 12 17:25:18.289024 waagent[2325]: 2025-12-12T17:25:18.288941Z INFO EnvHandler ExtHandler Gateway:None Dec 12 17:25:18.289024 waagent[2325]: 2025-12-12T17:25:18.288985Z INFO EnvHandler ExtHandler Routes:None Dec 12 17:25:18.293448 waagent[2325]: 2025-12-12T17:25:18.293411Z INFO ExtHandler ExtHandler Dec 12 17:25:18.293738 waagent[2325]: 2025-12-12T17:25:18.293711Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d99fcd74-1a00-4c90-abdc-b32f777a29c8 correlation 03e3ac83-fca3-4a28-89c5-177bc6bbe617 created: 2025-12-12T17:23:47.122345Z] Dec 12 17:25:18.294315 waagent[2325]: 2025-12-12T17:25:18.294277Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
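The MonitorHandler routing-table dump above is a raw copy of /proc/net/route, where the Destination, Gateway, and Mask columns are hexadecimal in host byte order (so 0114C80A is 10.200.20.1, the DHCP gateway seen earlier). A small sketch that decodes those columns back into dotted-quad form, assuming the standard /proc/net/route column layout and a little-endian host like this one:

```python
import socket
import struct

def hex_to_ip(field: str) -> str:
    """Convert a /proc/net/route hex field (host byte order, little-endian CPU) to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

with open("/proc/net/route") as f:
    next(f)  # skip the header row (Iface, Destination, Gateway, ...)
    for line in f:
        iface, dest, gw, *_mid, mask, _mtu, _win, _irtt = line.split()
        print(f"{iface:6} {hex_to_ip(dest):15} via {hex_to_ip(gw):15} mask {hex_to_ip(mask)}")
```

Running this on the table shown above would render, for example, `eth0 0014C80A ... 00FFFFFF` as 10.200.20.0 with mask 255.255.255.0.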
Dec 12 17:25:18.294723 waagent[2325]: 2025-12-12T17:25:18.294691Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 12 17:25:18.393962 waagent[2325]: 2025-12-12T17:25:18.393899Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 12 17:25:18.393962 waagent[2325]: Try `iptables -h' or 'iptables --help' for more information.) Dec 12 17:25:18.394371 waagent[2325]: 2025-12-12T17:25:18.394335Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B0135E74-B769-44C6-AF2A-D601C891A190;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 12 17:25:18.430313 waagent[2325]: 2025-12-12T17:25:18.430235Z INFO MonitorHandler ExtHandler Network interfaces: Dec 12 17:25:18.430313 waagent[2325]: Executing ['ip', '-a', '-o', 'link']: Dec 12 17:25:18.430313 waagent[2325]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 12 17:25:18.430313 waagent[2325]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c2:24:d7 brd ff:ff:ff:ff:ff:ff\ altname enx002248c224d7 Dec 12 17:25:18.430313 waagent[2325]: 3: enP4857s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c2:24:d7 brd ff:ff:ff:ff:ff:ff\ altname enP4857p0s2 Dec 12 17:25:18.430313 waagent[2325]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 12 17:25:18.430313 waagent[2325]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 12 17:25:18.430313 waagent[2325]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 12 17:25:18.430313 waagent[2325]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 12 17:25:18.430313 waagent[2325]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 12 17:25:18.430313 waagent[2325]: 2: eth0 inet6 fe80::222:48ff:fec2:24d7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 12 17:25:18.559529 waagent[2325]: 2025-12-12T17:25:18.558716Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 12 17:25:18.559529 waagent[2325]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:25:18.559529 waagent[2325]: pkts bytes target prot opt in out source destination Dec 12 17:25:18.559529 waagent[2325]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:25:18.559529 waagent[2325]: pkts bytes target prot opt in out source destination Dec 12 17:25:18.559529 waagent[2325]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:25:18.559529 waagent[2325]: pkts bytes target prot opt in out source destination Dec 12 17:25:18.559529 waagent[2325]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 12 17:25:18.559529 waagent[2325]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 12 17:25:18.559529 waagent[2325]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 12 17:25:18.561198 waagent[2325]: 2025-12-12T17:25:18.561162Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 12 17:25:18.561198 waagent[2325]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:25:18.561198 waagent[2325]: pkts bytes target prot opt in 
out source destination Dec 12 17:25:18.561198 waagent[2325]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:25:18.561198 waagent[2325]: pkts bytes target prot opt in out source destination Dec 12 17:25:18.561198 waagent[2325]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:25:18.561198 waagent[2325]: pkts bytes target prot opt in out source destination Dec 12 17:25:18.561198 waagent[2325]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 12 17:25:18.561198 waagent[2325]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 12 17:25:18.561198 waagent[2325]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 12 17:25:18.561595 waagent[2325]: 2025-12-12T17:25:18.561570Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 12 17:25:21.613736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:25:21.615129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:21.726064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:21.739132 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:21.833091 kubelet[2479]: E1212 17:25:21.833026 2479 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:21.835989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:21.836225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:21.836911 systemd[1]: kubelet.service: Consumed 179ms CPU time, 105.6M memory peak. Dec 12 17:25:31.863761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:25:31.865269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:31.962768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:31.973112 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:32.086701 kubelet[2494]: E1212 17:25:32.086628 2494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:32.088924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:32.089148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:32.089719 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.6M memory peak. Dec 12 17:25:33.774840 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:25:33.778058 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:39968.service - OpenSSH per-connection server daemon (10.200.16.10:39968). 
Dec 12 17:25:33.840953 chronyd[2050]: Selected source PHC0 Dec 12 17:25:34.483587 sshd[2502]: Accepted publickey for core from 10.200.16.10 port 39968 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:25:34.484683 sshd-session[2502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:34.488750 systemd-logind[2072]: New session 3 of user core. Dec 12 17:25:34.496181 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:25:34.783108 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:39982.service - OpenSSH per-connection server daemon (10.200.16.10:39982). Dec 12 17:25:35.204053 sshd[2508]: Accepted publickey for core from 10.200.16.10 port 39982 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:25:35.205173 sshd-session[2508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:35.209230 systemd-logind[2072]: New session 4 of user core. Dec 12 17:25:35.217210 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:25:35.438643 sshd[2511]: Connection closed by 10.200.16.10 port 39982 Dec 12 17:25:35.438550 sshd-session[2508]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:35.441992 systemd-logind[2072]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:25:35.442289 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:39982.service: Deactivated successfully. Dec 12 17:25:35.445302 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:25:35.446937 systemd-logind[2072]: Removed session 4. Dec 12 17:25:35.522609 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:39998.service - OpenSSH per-connection server daemon (10.200.16.10:39998). Dec 12 17:25:35.915795 sshd[2517]: Accepted publickey for core from 10.200.16.10 port 39998 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:25:35.916893 sshd-session[2517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:35.920820 systemd-logind[2072]: New session 5 of user core. Dec 12 17:25:35.931218 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:25:36.127926 sshd[2520]: Connection closed by 10.200.16.10 port 39998 Dec 12 17:25:36.128480 sshd-session[2517]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:36.131808 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:39998.service: Deactivated successfully. Dec 12 17:25:36.133283 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:25:36.133907 systemd-logind[2072]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:25:36.135010 systemd-logind[2072]: Removed session 5. Dec 12 17:25:36.215465 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:40008.service - OpenSSH per-connection server daemon (10.200.16.10:40008). Dec 12 17:25:36.631915 sshd[2526]: Accepted publickey for core from 10.200.16.10 port 40008 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:25:36.633008 sshd-session[2526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:36.637017 systemd-logind[2072]: New session 6 of user core. Dec 12 17:25:36.646185 systemd[1]: Started session-6.scope - Session 6 of User core. 
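The `Accepted publickey ... SHA256:x+iqxk…` entries identify the client key by its OpenSSH SHA256 fingerprint: the unpadded base64 encoding of the SHA-256 digest of the raw key blob. A sketch of that computation is below; the synthetic ed25519 blob is built in-line purely so the example runs without a real key, and is not the key from this host.

```python
import base64
import hashlib
import struct

def ssh_sha256_fingerprint(blob: bytes) -> str:
    """Return the OpenSSH-style SHA256 fingerprint of a public key blob."""
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest base64-encoded with trailing '=' padding stripped.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Synthetic blob: length-prefixed key type string plus 32 zero bytes for the key material.
key_type = b"ssh-ed25519"
blob = struct.pack(">I", len(key_type)) + key_type + struct.pack(">I", 32) + bytes(32)
print(ssh_sha256_fingerprint(blob))

# For a real authorized_keys line, the blob is the base64-decoded second field:
#   blob = base64.b64decode(line.split()[1])
```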
Dec 12 17:25:36.865744 sshd[2529]: Connection closed by 10.200.16.10 port 40008 Dec 12 17:25:36.866335 sshd-session[2526]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:36.869727 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:40008.service: Deactivated successfully. Dec 12 17:25:36.871295 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:25:36.871981 systemd-logind[2072]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:25:36.873208 systemd-logind[2072]: Removed session 6. Dec 12 17:25:36.973603 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:40010.service - OpenSSH per-connection server daemon (10.200.16.10:40010). Dec 12 17:25:37.399725 sshd[2535]: Accepted publickey for core from 10.200.16.10 port 40010 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:25:37.400798 sshd-session[2535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:37.404492 systemd-logind[2072]: New session 7 of user core. Dec 12 17:25:37.411006 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:25:37.814025 sudo[2539]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:25:37.814244 sudo[2539]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:37.876310 sudo[2539]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:37.954249 sshd[2538]: Connection closed by 10.200.16.10 port 40010 Dec 12 17:25:37.954113 sshd-session[2535]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:37.957872 systemd-logind[2072]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:25:37.958067 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:40010.service: Deactivated successfully. Dec 12 17:25:37.959452 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:25:37.962132 systemd-logind[2072]: Removed session 7. Dec 12 17:25:38.041732 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:40020.service - OpenSSH per-connection server daemon (10.200.16.10:40020). Dec 12 17:25:38.460819 sshd[2545]: Accepted publickey for core from 10.200.16.10 port 40020 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:25:38.461913 sshd-session[2545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:38.465710 systemd-logind[2072]: New session 8 of user core. Dec 12 17:25:38.476993 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:25:38.619571 sudo[2550]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:25:38.619780 sudo[2550]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:38.626803 sudo[2550]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:38.631415 sudo[2549]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:25:38.631619 sudo[2549]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:38.639577 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:25:38.670000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:25:38.677056 augenrules[2572]: No rules Dec 12 17:25:38.680375 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 12 17:25:38.680594 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:25:38.670000 audit[2572]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff1829f00 a2=420 a3=0 items=0 ppid=2553 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:38.698826 kernel: audit: type=1305 audit(1765560338.670:261): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:25:38.698929 kernel: audit: type=1300 audit(1765560338.670:261): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff1829f00 a2=420 a3=0 items=0 ppid=2553 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:38.670000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:25:38.706226 kernel: audit: type=1327 audit(1765560338.670:261): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:25:38.699080 sudo[2549]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:38.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.719062 kernel: audit: type=1130 audit(1765560338.679:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.719133 kernel: audit: type=1131 audit(1765560338.679:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.698000 audit[2549]: USER_END pid=2549 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.745385 kernel: audit: type=1106 audit(1765560338.698:264): pid=2549 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.745478 kernel: audit: type=1104 audit(1765560338.698:265): pid=2549 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.698000 audit[2549]: CRED_DISP pid=2549 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:38.782771 sshd[2548]: Connection closed by 10.200.16.10 port 40020 Dec 12 17:25:38.783076 sshd-session[2545]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:38.783000 audit[2545]: USER_END pid=2545 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:25:38.786721 systemd-logind[2072]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:25:38.788247 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:40020.service: Deactivated successfully. Dec 12 17:25:38.790658 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:25:38.792988 systemd-logind[2072]: Removed session 8. Dec 12 17:25:38.783000 audit[2545]: CRED_DISP pid=2545 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:25:38.816535 kernel: audit: type=1106 audit(1765560338.783:266): pid=2545 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:25:38.816620 kernel: audit: type=1104 audit(1765560338.783:267): pid=2545 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:25:38.816641 kernel: audit: type=1131 audit(1765560338.788:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.11:22-10.200.16.10:40020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.11:22-10.200.16.10:40020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.11:22-10.200.16.10:40022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:38.869188 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:40022.service - OpenSSH per-connection server daemon (10.200.16.10:40022). 
Dec 12 17:25:39.255000 audit[2581]: USER_ACCT pid=2581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:25:39.256847 sshd[2581]: Accepted publickey for core from 10.200.16.10 port 40022 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:25:39.257000 audit[2581]: CRED_ACQ pid=2581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:25:39.257000 audit[2581]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcfed7640 a2=3 a3=0 items=0 ppid=1 pid=2581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:39.257000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:25:39.258279 sshd-session[2581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:39.262038 systemd-logind[2072]: New session 9 of user core. Dec 12 17:25:39.272003 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:25:39.273000 audit[2581]: USER_START pid=2581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:25:39.274000 audit[2584]: CRED_ACQ pid=2584 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:25:39.403000 audit[2585]: USER_ACCT pid=2585 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:39.404830 sudo[2585]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:25:39.405059 sudo[2585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:39.404000 audit[2585]: CRED_REFR pid=2585 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:39.406000 audit[2585]: USER_START pid=2585 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:41.831317 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:25:41.842102 (dockerd)[2602]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:25:42.091755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
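In the audit records above, the PROCTITLE field is hex-encoded, with NUL bytes separating argv entries. A small sketch of the decoding, applied to two values copied from this log (the first resolves to the auditctl invocation, the second to the privileged sshd-session process title):

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE hex string into a readable command line."""
    # argv entries are separated by NUL bytes in the raw proctitle.
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode(errors="replace")

print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))  # -> /sbin/auditctl -R /etc/audit/audit.rules

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
```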
Dec 12 17:25:42.093340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:42.810997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:42.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:42.821356 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:42.854666 kubelet[2615]: E1212 17:25:42.854607 2615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:42.856796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:42.857150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:42.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:25:42.857901 systemd[1]: kubelet.service: Consumed 111ms CPU time, 106.8M memory peak. Dec 12 17:25:43.958994 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 12 17:25:44.110591 dockerd[2602]: time="2025-12-12T17:25:44.110539471Z" level=info msg="Starting up" Dec 12 17:25:44.111557 dockerd[2602]: time="2025-12-12T17:25:44.111535868Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:25:44.120169 dockerd[2602]: time="2025-12-12T17:25:44.120142599Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:25:44.180151 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1848231601-merged.mount: Deactivated successfully. Dec 12 17:25:44.220781 dockerd[2602]: time="2025-12-12T17:25:44.220198358Z" level=info msg="Loading containers: start." 
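The kubelet start attempt above (and its retries further down) exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during kubeadm init/join, so the failure simply means the node has not been joined at this point. Purely as an illustration of the file's expected shape, not this node's actual configuration, here is a sketch that writes a minimal KubeletConfiguration; cgroupDriver and staticPodPath match values the kubelet reports later in this log, everything else a real node would need is omitted.

```python
from pathlib import Path

# Minimal KubeletConfiguration, normally generated by `kubeadm init`/`join`.
MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)  # requires root on a real node
path.write_text(MINIMAL_CONFIG)
print(f"wrote {path}")
```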
Dec 12 17:25:44.271877 kernel: Initializing XFRM netlink socket Dec 12 17:25:44.328000 audit[2663]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.332066 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 12 17:25:44.332109 kernel: audit: type=1325 audit(1765560344.328:280): table=nat:5 family=2 entries=2 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.328000 audit[2663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe8879b70 a2=0 a3=0 items=0 ppid=2602 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.359420 kernel: audit: type=1300 audit(1765560344.328:280): arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe8879b70 a2=0 a3=0 items=0 ppid=2602 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:25:44.368178 kernel: audit: type=1327 audit(1765560344.328:280): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:25:44.331000 audit[2665]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.377350 kernel: audit: type=1325 audit(1765560344.331:281): table=filter:6 family=2 entries=2 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.331000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe7b56890 a2=0 a3=0 items=0 ppid=2602 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.395494 kernel: audit: type=1300 audit(1765560344.331:281): arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe7b56890 a2=0 a3=0 items=0 ppid=2602 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:25:44.404727 kernel: audit: type=1327 audit(1765560344.331:281): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:25:44.331000 audit[2667]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.414387 kernel: audit: type=1325 audit(1765560344.331:282): table=filter:7 family=2 entries=1 op=nft_register_chain pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.331000 audit[2667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb38f400 a2=0 a3=0 items=0 ppid=2602 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.432535 kernel: audit: type=1300 audit(1765560344.331:282): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb38f400 a2=0 a3=0 items=0 ppid=2602 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:25:44.442176 kernel: audit: type=1327 audit(1765560344.331:282): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:25:44.337000 audit[2669]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.451831 kernel: audit: type=1325 audit(1765560344.337:283): table=filter:8 family=2 entries=1 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.337000 audit[2669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9e10ee0 a2=0 a3=0 items=0 ppid=2602 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.337000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:25:44.337000 audit[2671]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.337000 audit[2671]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd63a7eb0 a2=0 a3=0 items=0 ppid=2602 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.337000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:25:44.340000 audit[2673]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.340000 audit[2673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdc4ca3e0 a2=0 a3=0 items=0 ppid=2602 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:25:44.340000 audit[2675]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.340000 audit[2675]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff14ef250 a2=0 a3=0 items=0 ppid=2602 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.340000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:25:44.340000 audit[2677]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.340000 audit[2677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc6e70470 a2=0 a3=0 items=0 ppid=2602 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:25:44.457000 audit[2680]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.457000 audit[2680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd4a54fb0 a2=0 a3=0 items=0 ppid=2602 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.457000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:25:44.458000 audit[2682]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.458000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdf1c5740 a2=0 a3=0 items=0 ppid=2602 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.458000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:25:44.460000 audit[2684]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.460000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffceeb9c10 a2=0 a3=0 items=0 ppid=2602 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.460000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:25:44.461000 audit[2686]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.461000 audit[2686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffdb147e70 a2=0 a3=0 items=0 ppid=2602 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.461000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:25:44.463000 audit[2688]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.463000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdc25a800 a2=0 a3=0 items=0 ppid=2602 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:25:44.567000 audit[2718]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.567000 audit[2718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd20ffaa0 a2=0 a3=0 items=0 ppid=2602 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.567000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:25:44.570000 audit[2720]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.570000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc0ee5ad0 a2=0 a3=0 items=0 ppid=2602 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.570000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:25:44.571000 audit[2722]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.571000 audit[2722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffcd93470 a2=0 a3=0 items=0 ppid=2602 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:25:44.573000 audit[2724]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.573000 audit[2724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdac10540 a2=0 a3=0 items=0 ppid=2602 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.573000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:25:44.574000 audit[2726]: NETFILTER_CFG table=filter:22 family=10 entries=1 
op=nft_register_chain pid=2726 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.574000 audit[2726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1a33bf0 a2=0 a3=0 items=0 ppid=2602 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:25:44.576000 audit[2728]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2728 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.576000 audit[2728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc29c3330 a2=0 a3=0 items=0 ppid=2602 pid=2728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.576000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:25:44.577000 audit[2730]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.577000 audit[2730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd0445ed0 a2=0 a3=0 items=0 ppid=2602 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.577000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:25:44.579000 audit[2732]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.579000 audit[2732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff1a9fb00 a2=0 a3=0 items=0 ppid=2602 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.579000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:25:44.581000 audit[2734]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.581000 audit[2734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffc24628f0 a2=0 a3=0 items=0 ppid=2602 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:25:44.582000 audit[2736]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2736 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.582000 audit[2736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff1693d30 a2=0 a3=0 items=0 ppid=2602 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.582000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:25:44.584000 audit[2738]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.584000 audit[2738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc53fd230 a2=0 a3=0 items=0 ppid=2602 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:25:44.586000 audit[2740]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.586000 audit[2740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffffe17cd0 a2=0 a3=0 items=0 ppid=2602 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:25:44.587000 audit[2742]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.587000 audit[2742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc571c560 a2=0 a3=0 items=0 ppid=2602 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.587000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:25:44.591000 audit[2747]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.591000 audit[2747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd4b6a920 a2=0 a3=0 items=0 ppid=2602 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.591000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:25:44.593000 audit[2749]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.593000 audit[2749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffb1af500 a2=0 
a3=0 items=0 ppid=2602 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.593000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:25:44.594000 audit[2751]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2751 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.594000 audit[2751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdd8535e0 a2=0 a3=0 items=0 ppid=2602 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.594000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:25:44.596000 audit[2753]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2753 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.596000 audit[2753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4b2b7e0 a2=0 a3=0 items=0 ppid=2602 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.596000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:25:44.598000 audit[2755]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.598000 audit[2755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd921c670 a2=0 a3=0 items=0 ppid=2602 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:25:44.599000 audit[2757]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:44.599000 audit[2757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd0ddc120 a2=0 a3=0 items=0 ppid=2602 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.599000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:25:44.702000 audit[2762]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.702000 audit[2762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd321a3a0 a2=0 a3=0 items=0 ppid=2602 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.702000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:25:44.704000 audit[2764]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.704000 audit[2764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffeb7286b0 a2=0 a3=0 items=0 ppid=2602 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:25:44.711000 audit[2772]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.711000 audit[2772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffcc69cf70 a2=0 a3=0 items=0 ppid=2602 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:25:44.715000 audit[2777]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.715000 audit[2777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffeb881940 a2=0 a3=0 items=0 ppid=2602 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:25:44.717000 audit[2779]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.717000 audit[2779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffcd22ece0 a2=0 a3=0 items=0 ppid=2602 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:25:44.718000 audit[2781]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.718000 audit[2781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcee7deb0 a2=0 a3=0 items=0 ppid=2602 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:25:44.720000 audit[2783]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.720000 audit[2783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd33f2ec0 a2=0 a3=0 items=0 ppid=2602 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:25:44.722000 audit[2785]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:44.722000 audit[2785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd681e490 a2=0 a3=0 items=0 ppid=2602 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:44.722000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:25:44.723722 systemd-networkd[1689]: docker0: Link UP Dec 12 17:25:44.741391 dockerd[2602]: time="2025-12-12T17:25:44.741352218Z" level=info msg="Loading containers: done." Dec 12 17:25:44.751306 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3517234296-merged.mount: Deactivated successfully. Dec 12 17:25:44.795007 dockerd[2602]: time="2025-12-12T17:25:44.794955756Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:25:44.795189 dockerd[2602]: time="2025-12-12T17:25:44.795053326Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:25:44.795189 dockerd[2602]: time="2025-12-12T17:25:44.795161328Z" level=info msg="Initializing buildkit" Dec 12 17:25:44.840589 dockerd[2602]: time="2025-12-12T17:25:44.840473638Z" level=info msg="Completed buildkit initialization" Dec 12 17:25:44.845510 dockerd[2602]: time="2025-12-12T17:25:44.845416637Z" level=info msg="Daemon has completed initialization" Dec 12 17:25:44.845669 dockerd[2602]: time="2025-12-12T17:25:44.845633409Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:25:44.845969 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:25:44.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:45.811138 containerd[2108]: time="2025-12-12T17:25:45.811084325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 17:25:46.877676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695675173.mount: Deactivated successfully. Dec 12 17:25:48.169901 containerd[2108]: time="2025-12-12T17:25:48.169254203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:48.173327 containerd[2108]: time="2025-12-12T17:25:48.173151980Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=24835766" Dec 12 17:25:48.177071 containerd[2108]: time="2025-12-12T17:25:48.177044773Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:48.181639 containerd[2108]: time="2025-12-12T17:25:48.181588547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:48.182356 containerd[2108]: time="2025-12-12T17:25:48.182204800Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 2.370943479s" Dec 12 17:25:48.182356 containerd[2108]: time="2025-12-12T17:25:48.182235081Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 12 17:25:48.182985 containerd[2108]: time="2025-12-12T17:25:48.182956752Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 17:25:49.842693 containerd[2108]: time="2025-12-12T17:25:49.842031000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:49.845150 containerd[2108]: time="2025-12-12T17:25:49.845095535Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801" Dec 12 17:25:49.848678 containerd[2108]: time="2025-12-12T17:25:49.848652761Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:49.854868 containerd[2108]: time="2025-12-12T17:25:49.854833602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:49.855568 containerd[2108]: time="2025-12-12T17:25:49.855539480Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 
1.672552232s" Dec 12 17:25:49.855568 containerd[2108]: time="2025-12-12T17:25:49.855567641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 12 17:25:49.856459 containerd[2108]: time="2025-12-12T17:25:49.856431563Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 17:25:51.361818 containerd[2108]: time="2025-12-12T17:25:51.361160323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:51.364287 containerd[2108]: time="2025-12-12T17:25:51.364247676Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17610300" Dec 12 17:25:51.368841 containerd[2108]: time="2025-12-12T17:25:51.368811827Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:51.374015 containerd[2108]: time="2025-12-12T17:25:51.373990722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:51.374549 containerd[2108]: time="2025-12-12T17:25:51.374526912Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.518069709s" Dec 12 17:25:51.374654 containerd[2108]: time="2025-12-12T17:25:51.374640907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 12 17:25:51.375228 containerd[2108]: time="2025-12-12T17:25:51.375154456Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 17:25:52.547022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898932266.mount: Deactivated successfully. 
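The containerd "Pulled image … in …" lines above and below record the repo-reported size together with the wall-clock pull time, which gives a rough throughput figure for each pull. A small sketch over one such message; the string is trimmed from the kube-apiserver pull above, and sub-second pulls (logged in ms, like the pause image later on) would need an extra case.

```python
import re

# Trimmed shape of the containerd completion message for kube-apiserver above.
msg = ('Pulled image "registry.k8s.io/kube-apiserver:v1.32.10" with image id '
       '"sha256:03aec5..." size "26428558" in 2.370943479s')

m = re.search(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)s', msg)
image, size, secs = m.group(1), int(m.group(2)), float(m.group(3))
print(f"{image}: {size / secs / 1e6:.1f} MB/s over {secs:.1f}s")
# -> registry.k8s.io/kube-apiserver:v1.32.10: 11.1 MB/s over 2.4s
```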
Dec 12 17:25:52.836480 containerd[2108]: time="2025-12-12T17:25:52.836338540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:52.839696 containerd[2108]: time="2025-12-12T17:25:52.839517383Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=0" Dec 12 17:25:52.843060 containerd[2108]: time="2025-12-12T17:25:52.843029322Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:52.847348 containerd[2108]: time="2025-12-12T17:25:52.846923752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:52.847348 containerd[2108]: time="2025-12-12T17:25:52.847232384Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.471898523s" Dec 12 17:25:52.847348 containerd[2108]: time="2025-12-12T17:25:52.847261129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 12 17:25:52.847884 containerd[2108]: time="2025-12-12T17:25:52.847845424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 17:25:52.863474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 12 17:25:52.865109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:52.959086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:52.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:52.962864 kernel: kauditd_printk_skb: 111 callbacks suppressed Dec 12 17:25:52.962911 kernel: audit: type=1130 audit(1765560352.957:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:52.977735 (kubelet)[2906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:53.089256 kubelet[2906]: E1212 17:25:53.089114 2906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:53.091536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:53.091772 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 12 17:25:53.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:25:53.093949 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107M memory peak. Dec 12 17:25:53.107902 kernel: audit: type=1131 audit(1765560353.092:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:25:54.008595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2398782492.mount: Deactivated successfully. Dec 12 17:25:55.340991 update_engine[2075]: I20251212 17:25:55.340919 2075 update_attempter.cc:509] Updating boot flags... Dec 12 17:25:55.751658 containerd[2108]: time="2025-12-12T17:25:55.751597923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:55.756202 containerd[2108]: time="2025-12-12T17:25:55.755996918Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16201880" Dec 12 17:25:55.759553 containerd[2108]: time="2025-12-12T17:25:55.759523402Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:55.763871 containerd[2108]: time="2025-12-12T17:25:55.763831050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:55.764567 containerd[2108]: time="2025-12-12T17:25:55.764421153Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.916419157s" Dec 12 17:25:55.764567 containerd[2108]: time="2025-12-12T17:25:55.764451282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 12 17:25:55.765012 containerd[2108]: time="2025-12-12T17:25:55.764982288Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:25:56.456977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098160194.mount: Deactivated successfully. 
Dec 12 17:25:56.482850 containerd[2108]: time="2025-12-12T17:25:56.482805341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:25:56.485770 containerd[2108]: time="2025-12-12T17:25:56.485729329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:25:56.489737 containerd[2108]: time="2025-12-12T17:25:56.489694745Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:25:56.494414 containerd[2108]: time="2025-12-12T17:25:56.494373091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:25:56.494803 containerd[2108]: time="2025-12-12T17:25:56.494677659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 729.523367ms" Dec 12 17:25:56.494803 containerd[2108]: time="2025-12-12T17:25:56.494708868Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:25:56.495482 containerd[2108]: time="2025-12-12T17:25:56.495461527Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 17:25:57.235641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464162680.mount: Deactivated successfully. 
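Mount unit names like var-lib-containerd-tmpmounts-containerd\x2dmount2464162680.mount in the entries above use systemd's path escaping: '/' becomes '-' and literal characters such as '-' are written as \xNN. Reversing that (the equivalent of `systemd-escape --unescape --path`) recovers the underlying directory; a small sketch:

```python
import re

def unit_to_path(unit: str) -> str:
    """Undo systemd path escaping: '-' separates path components and \\xNN
    encodes a literal byte (so \\x2d is a real '-')."""
    body = unit.removesuffix(".mount")
    body = body.replace("-", "/")           # separators first...
    body = re.sub(r"\\x([0-9a-fA-F]{2})",   # ...then literal escapes
                  lambda m: chr(int(m.group(1), 16)), body)
    return "/" + body

print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2464162680.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount2464162680
```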
Dec 12 17:25:59.469168 containerd[2108]: time="2025-12-12T17:25:59.469112520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:59.473029 containerd[2108]: time="2025-12-12T17:25:59.472874516Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060488" Dec 12 17:25:59.476503 containerd[2108]: time="2025-12-12T17:25:59.476461666Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:59.483556 containerd[2108]: time="2025-12-12T17:25:59.483516409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:59.484397 containerd[2108]: time="2025-12-12T17:25:59.484257258Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.988695848s" Dec 12 17:25:59.484397 containerd[2108]: time="2025-12-12T17:25:59.484285147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 12 17:26:01.714152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:01.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:01.714681 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107M memory peak. Dec 12 17:26:01.721095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:01.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:01.742677 kernel: audit: type=1130 audit(1765560361.713:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:01.742738 kernel: audit: type=1131 audit(1765560361.713:324): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:01.757826 systemd[1]: Reload requested from client PID 3170 ('systemctl') (unit session-9.scope)... Dec 12 17:26:01.757845 systemd[1]: Reloading... Dec 12 17:26:01.849883 zram_generator::config[3219]: No configuration found. Dec 12 17:26:02.014666 systemd[1]: Reloading finished in 255 ms. 
Dec 12 17:26:02.028000 audit: BPF prog-id=87 op=LOAD Dec 12 17:26:02.029000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:26:02.039537 kernel: audit: type=1334 audit(1765560362.028:325): prog-id=87 op=LOAD Dec 12 17:26:02.039608 kernel: audit: type=1334 audit(1765560362.029:326): prog-id=69 op=UNLOAD Dec 12 17:26:02.029000 audit: BPF prog-id=88 op=LOAD Dec 12 17:26:02.043364 kernel: audit: type=1334 audit(1765560362.029:327): prog-id=88 op=LOAD Dec 12 17:26:02.029000 audit: BPF prog-id=89 op=LOAD Dec 12 17:26:02.047316 kernel: audit: type=1334 audit(1765560362.029:328): prog-id=89 op=LOAD Dec 12 17:26:02.029000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:26:02.051728 kernel: audit: type=1334 audit(1765560362.029:329): prog-id=70 op=UNLOAD Dec 12 17:26:02.029000 audit: BPF prog-id=71 op=UNLOAD Dec 12 17:26:02.056034 kernel: audit: type=1334 audit(1765560362.029:330): prog-id=71 op=UNLOAD Dec 12 17:26:02.034000 audit: BPF prog-id=90 op=LOAD Dec 12 17:26:02.060406 kernel: audit: type=1334 audit(1765560362.034:331): prog-id=90 op=LOAD Dec 12 17:26:02.034000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:26:02.064596 kernel: audit: type=1334 audit(1765560362.034:332): prog-id=73 op=UNLOAD Dec 12 17:26:02.038000 audit: BPF prog-id=91 op=LOAD Dec 12 17:26:02.066000 audit: BPF prog-id=92 op=LOAD Dec 12 17:26:02.066000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:26:02.066000 audit: BPF prog-id=75 op=UNLOAD Dec 12 17:26:02.066000 audit: BPF prog-id=93 op=LOAD Dec 12 17:26:02.066000 audit: BPF prog-id=94 op=LOAD Dec 12 17:26:02.066000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:26:02.066000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:26:02.067000 audit: BPF prog-id=95 op=LOAD Dec 12 17:26:02.067000 audit: BPF prog-id=68 op=UNLOAD Dec 12 17:26:02.068000 audit: BPF prog-id=96 op=LOAD Dec 12 17:26:02.068000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:26:02.069000 audit: BPF prog-id=97 op=LOAD Dec 12 17:26:02.069000 audit: BPF prog-id=81 op=UNLOAD Dec 12 17:26:02.069000 audit: BPF prog-id=98 op=LOAD Dec 12 17:26:02.069000 audit: BPF prog-id=99 op=LOAD Dec 12 17:26:02.069000 audit: BPF prog-id=82 op=UNLOAD Dec 12 17:26:02.069000 audit: BPF prog-id=83 op=UNLOAD Dec 12 17:26:02.069000 audit: BPF prog-id=100 op=LOAD Dec 12 17:26:02.069000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:26:02.069000 audit: BPF prog-id=101 op=LOAD Dec 12 17:26:02.069000 audit: BPF prog-id=102 op=LOAD Dec 12 17:26:02.069000 audit: BPF prog-id=85 op=UNLOAD Dec 12 17:26:02.069000 audit: BPF prog-id=86 op=UNLOAD Dec 12 17:26:02.070000 audit: BPF prog-id=103 op=LOAD Dec 12 17:26:02.070000 audit: BPF prog-id=67 op=UNLOAD Dec 12 17:26:02.071000 audit: BPF prog-id=104 op=LOAD Dec 12 17:26:02.071000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:26:02.071000 audit: BPF prog-id=105 op=LOAD Dec 12 17:26:02.071000 audit: BPF prog-id=106 op=LOAD Dec 12 17:26:02.071000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:26:02.071000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:26:02.082268 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:26:02.082334 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:26:02.083924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:02.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:02.083977 systemd[1]: kubelet.service: Consumed 80ms CPU time, 95.3M memory peak. 
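The reload above swaps systemd's BPF programs (the LOAD/UNLOAD pairs) and stops the old kubelet unit. When the kubelet is started again just below, it repeatedly logs "dial tcp 10.200.20.11:6443: connect: connection refused" because nothing is serving the API endpoint yet. The same reachability check, done outside the kubelet, is just a TCP connect; the host and port are taken from those log lines, and the snippet is purely illustrative.

```python
import socket

API_SERVER = ("10.200.20.11", 6443)  # endpoint from the kubelet errors below

try:
    with socket.create_connection(API_SERVER, timeout=2):
        print("API server port is accepting connections")
except OSError as exc:  # e.g. ConnectionRefusedError while the control plane is down
    print(f"not reachable yet: {exc}")
```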
Dec 12 17:26:02.085464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:02.258557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:02.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:02.268109 (kubelet)[3286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:26:02.294972 kubelet[3286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:02.294972 kubelet[3286]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:26:02.294972 kubelet[3286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:02.295316 kubelet[3286]: I1212 17:26:02.295059 3286 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:26:02.868673 kubelet[3286]: I1212 17:26:02.868626 3286 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:26:02.868673 kubelet[3286]: I1212 17:26:02.868664 3286 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:26:02.868910 kubelet[3286]: I1212 17:26:02.868892 3286 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:26:02.887968 kubelet[3286]: E1212 17:26:02.887831 3286 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:26:02.888739 kubelet[3286]: I1212 17:26:02.888542 3286 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:26:02.894097 kubelet[3286]: I1212 17:26:02.894076 3286 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:26:02.896402 kubelet[3286]: I1212 17:26:02.896385 3286 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:26:02.897087 kubelet[3286]: I1212 17:26:02.897058 3286 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:26:02.897210 kubelet[3286]: I1212 17:26:02.897089 3286 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-74f46d5ce1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:26:02.897291 kubelet[3286]: I1212 17:26:02.897219 3286 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:26:02.897291 kubelet[3286]: I1212 17:26:02.897227 3286 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:26:02.897356 kubelet[3286]: I1212 17:26:02.897343 3286 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:02.899548 kubelet[3286]: I1212 17:26:02.899530 3286 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:26:02.899590 kubelet[3286]: I1212 17:26:02.899552 3286 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:26:02.899590 kubelet[3286]: I1212 17:26:02.899572 3286 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:26:02.899590 kubelet[3286]: I1212 17:26:02.899580 3286 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:26:02.905626 kubelet[3286]: W1212 17:26:02.905593 3286 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 12 17:26:02.905696 kubelet[3286]: E1212 17:26:02.905637 3286 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:26:02.906339 kubelet[3286]: W1212 
17:26:02.906206 3286 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-74f46d5ce1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 12 17:26:02.906517 kubelet[3286]: I1212 17:26:02.906410 3286 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:26:02.906901 kubelet[3286]: I1212 17:26:02.906886 3286 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:26:02.907072 kubelet[3286]: W1212 17:26:02.907051 3286 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:26:02.907290 kubelet[3286]: E1212 17:26:02.906967 3286 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-74f46d5ce1&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:26:02.907748 kubelet[3286]: I1212 17:26:02.907733 3286 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:26:02.910172 kubelet[3286]: I1212 17:26:02.910153 3286 server.go:1287] "Started kubelet" Dec 12 17:26:02.911244 kubelet[3286]: I1212 17:26:02.911226 3286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:26:02.912000 audit[3298]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:02.912000 audit[3298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe37e6af0 a2=0 a3=0 items=0 ppid=3286 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:26:02.914000 audit[3299]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:02.914000 audit[3299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6a04a60 a2=0 a3=0 items=0 ppid=3286 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:26:02.916984 kubelet[3286]: E1212 17:26:02.915621 3286 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-74f46d5ce1.188087cbb4c27c2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-74f46d5ce1,UID:ci-4515.1.0-a-74f46d5ce1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-74f46d5ce1,},FirstTimestamp:2025-12-12 17:26:02.909989932 +0000 UTC m=+0.638627626,LastTimestamp:2025-12-12 17:26:02.909989932 +0000 UTC m=+0.638627626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-74f46d5ce1,}" Dec 12 17:26:02.916984 kubelet[3286]: I1212 17:26:02.915750 3286 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:26:02.916984 kubelet[3286]: I1212 17:26:02.916337 3286 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:26:02.918098 kubelet[3286]: I1212 17:26:02.918044 3286 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:26:02.918276 kubelet[3286]: I1212 17:26:02.918256 3286 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:26:02.918422 kubelet[3286]: I1212 17:26:02.918405 3286 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:26:02.919241 kubelet[3286]: I1212 17:26:02.918934 3286 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:26:02.919241 kubelet[3286]: E1212 17:26:02.919107 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:02.920251 kubelet[3286]: I1212 17:26:02.919799 3286 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:26:02.920251 kubelet[3286]: I1212 17:26:02.919907 3286 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:26:02.921230 kubelet[3286]: E1212 17:26:02.921184 3286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-74f46d5ce1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Dec 12 17:26:02.920000 audit[3301]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:02.921510 kubelet[3286]: I1212 17:26:02.921452 3286 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:26:02.920000 audit[3301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe8a99220 a2=0 a3=0 items=0 ppid=3286 pid=3301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.920000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:02.922610 kubelet[3286]: I1212 17:26:02.922594 3286 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:26:02.922725 kubelet[3286]: I1212 17:26:02.922716 3286 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:26:02.923000 audit[3303]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:02.923000 audit[3303]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=340 a0=3 a1=fffffc99a840 a2=0 a3=0 items=0 ppid=3286 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:02.942725 kubelet[3286]: E1212 17:26:02.942702 3286 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:26:02.942967 kubelet[3286]: W1212 17:26:02.942780 3286 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 12 17:26:02.942967 kubelet[3286]: E1212 17:26:02.942813 3286 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:26:02.947147 kubelet[3286]: I1212 17:26:02.947106 3286 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:26:02.947147 kubelet[3286]: I1212 17:26:02.947117 3286 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:26:02.947147 kubelet[3286]: I1212 17:26:02.947132 3286 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:02.956223 kubelet[3286]: I1212 17:26:02.956014 3286 policy_none.go:49] "None policy: Start" Dec 12 17:26:02.956223 kubelet[3286]: I1212 17:26:02.956045 3286 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:26:02.956223 kubelet[3286]: I1212 17:26:02.956055 3286 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:26:02.965149 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:26:02.972000 audit[3309]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:02.972000 audit[3309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffed5a2c0 a2=0 a3=0 items=0 ppid=3286 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.972000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 17:26:02.974054 kubelet[3286]: I1212 17:26:02.973942 3286 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:26:02.973000 audit[3310]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3310 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:02.973000 audit[3310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc5f40070 a2=0 a3=0 items=0 ppid=3286 pid=3310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:26:02.974802 kubelet[3286]: I1212 17:26:02.974781 3286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:26:02.974829 kubelet[3286]: I1212 17:26:02.974804 3286 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:26:02.974829 kubelet[3286]: I1212 17:26:02.974821 3286 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:26:02.974829 kubelet[3286]: I1212 17:26:02.974827 3286 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:26:02.974894 kubelet[3286]: E1212 17:26:02.974871 3286 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:26:02.974000 audit[3311]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:02.974000 audit[3311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc178dae0 a2=0 a3=0 items=0 ppid=3286 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:26:02.975000 audit[3312]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:02.975000 audit[3312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca47dd80 a2=0 a3=0 items=0 ppid=3286 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.975000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:26:02.976000 audit[3313]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:02.976000 audit[3313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1f35b30 a2=0 a3=0 items=0 ppid=3286 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:26:02.977000 audit[3314]: 
NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:02.977000 audit[3314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc160e90 a2=0 a3=0 items=0 ppid=3286 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:26:02.978000 audit[3315]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:02.978000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2d1e210 a2=0 a3=0 items=0 ppid=3286 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.978000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:26:02.979000 audit[3316]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:02.979000 audit[3316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf6df6c0 a2=0 a3=0 items=0 ppid=3286 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:02.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:26:02.980649 kubelet[3286]: W1212 17:26:02.980622 3286 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 12 17:26:02.980684 kubelet[3286]: E1212 17:26:02.980652 3286 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:26:02.984352 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:26:02.987214 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 12 17:26:02.998702 kubelet[3286]: I1212 17:26:02.998610 3286 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:26:02.998793 kubelet[3286]: I1212 17:26:02.998775 3286 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:26:02.998818 kubelet[3286]: I1212 17:26:02.998792 3286 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:26:02.999205 kubelet[3286]: I1212 17:26:02.999185 3286 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:26:03.000455 kubelet[3286]: E1212 17:26:03.000438 3286 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:26:03.000613 kubelet[3286]: E1212 17:26:03.000540 3286 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:03.083748 systemd[1]: Created slice kubepods-burstable-poda295f4c13f275486a35185c08e8ce2d6.slice - libcontainer container kubepods-burstable-poda295f4c13f275486a35185c08e8ce2d6.slice. Dec 12 17:26:03.094637 kubelet[3286]: E1212 17:26:03.094445 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.097654 systemd[1]: Created slice kubepods-burstable-podd0684db336fbc9f987f4c12ae9b3fbef.slice - libcontainer container kubepods-burstable-podd0684db336fbc9f987f4c12ae9b3fbef.slice. Dec 12 17:26:03.100646 kubelet[3286]: I1212 17:26:03.100623 3286 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.101041 kubelet[3286]: E1212 17:26:03.101018 3286 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.103730 kubelet[3286]: E1212 17:26:03.103710 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.106113 systemd[1]: Created slice kubepods-burstable-poda45aa045e70900ba1bec36b18b058c97.slice - libcontainer container kubepods-burstable-poda45aa045e70900ba1bec36b18b058c97.slice. 
Dec 12 17:26:03.107771 kubelet[3286]: E1212 17:26:03.107582 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.122894 kubelet[3286]: E1212 17:26:03.122302 3286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-74f46d5ce1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Dec 12 17:26:03.123641 kubelet[3286]: I1212 17:26:03.123616 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.123694 kubelet[3286]: I1212 17:26:03.123646 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a295f4c13f275486a35185c08e8ce2d6-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-74f46d5ce1\" (UID: \"a295f4c13f275486a35185c08e8ce2d6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.123694 kubelet[3286]: I1212 17:26:03.123659 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a295f4c13f275486a35185c08e8ce2d6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-74f46d5ce1\" (UID: \"a295f4c13f275486a35185c08e8ce2d6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.123694 kubelet[3286]: I1212 17:26:03.123670 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.123694 kubelet[3286]: I1212 17:26:03.123680 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.123694 kubelet[3286]: I1212 17:26:03.123690 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.123771 kubelet[3286]: I1212 17:26:03.123699 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a295f4c13f275486a35185c08e8ce2d6-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-74f46d5ce1\" (UID: \"a295f4c13f275486a35185c08e8ce2d6\") " 
pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.123771 kubelet[3286]: I1212 17:26:03.123709 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.123771 kubelet[3286]: I1212 17:26:03.123718 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a45aa045e70900ba1bec36b18b058c97-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-74f46d5ce1\" (UID: \"a45aa045e70900ba1bec36b18b058c97\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.303629 kubelet[3286]: I1212 17:26:03.303593 3286 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.303994 kubelet[3286]: E1212 17:26:03.303936 3286 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.396583 containerd[2108]: time="2025-12-12T17:26:03.396445317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-74f46d5ce1,Uid:a295f4c13f275486a35185c08e8ce2d6,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:03.404581 containerd[2108]: time="2025-12-12T17:26:03.404544921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-74f46d5ce1,Uid:d0684db336fbc9f987f4c12ae9b3fbef,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:03.408962 containerd[2108]: time="2025-12-12T17:26:03.408932703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-74f46d5ce1,Uid:a45aa045e70900ba1bec36b18b058c97,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:03.482374 containerd[2108]: time="2025-12-12T17:26:03.482335201Z" level=info msg="connecting to shim 8144a426d35cea093878a822e60ddafc97775445573b9c756b776c34ff36b336" address="unix:///run/containerd/s/cd750b601291e670a7d16f15ef515c3ad211b727cbe12bb832ca2ab4fb8b721f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:03.507152 systemd[1]: Started cri-containerd-8144a426d35cea093878a822e60ddafc97775445573b9c756b776c34ff36b336.scope - libcontainer container 8144a426d35cea093878a822e60ddafc97775445573b9c756b776c34ff36b336. 
Dec 12 17:26:03.509565 containerd[2108]: time="2025-12-12T17:26:03.509429406Z" level=info msg="connecting to shim 2ca8faa9aa1922d718087640b88368929eef42ca14597e30230515a6a4a5ccc3" address="unix:///run/containerd/s/4fcd9e9e3fa623a7efc31f05dba3872ac01e713db7ca9e644a4fdb2caf9506fb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:03.512654 containerd[2108]: time="2025-12-12T17:26:03.512512174Z" level=info msg="connecting to shim 23c19144861afa10f9a4eb06144f4e0f9bfdbcbe0e9ca2f04327fce79b2019e4" address="unix:///run/containerd/s/59f2228b92b05f69f542dd11c92ebd6134bce8e3065f25823b1db1bcb2160640" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:03.525181 kubelet[3286]: E1212 17:26:03.525122 3286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-74f46d5ce1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Dec 12 17:26:03.526000 audit: BPF prog-id=107 op=LOAD Dec 12 17:26:03.528000 audit: BPF prog-id=108 op=LOAD Dec 12 17:26:03.528000 audit[3337]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343461343236643335636561303933383738613832326536306464 Dec 12 17:26:03.528000 audit: BPF prog-id=108 op=UNLOAD Dec 12 17:26:03.528000 audit[3337]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343461343236643335636561303933383738613832326536306464 Dec 12 17:26:03.528000 audit: BPF prog-id=109 op=LOAD Dec 12 17:26:03.528000 audit[3337]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343461343236643335636561303933383738613832326536306464 Dec 12 17:26:03.528000 audit: BPF prog-id=110 op=LOAD Dec 12 17:26:03.528000 audit[3337]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.528000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343461343236643335636561303933383738613832326536306464 Dec 12 17:26:03.529000 audit: BPF prog-id=110 op=UNLOAD Dec 12 17:26:03.529000 audit[3337]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343461343236643335636561303933383738613832326536306464 Dec 12 17:26:03.529000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:26:03.529000 audit[3337]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343461343236643335636561303933383738613832326536306464 Dec 12 17:26:03.529000 audit: BPF prog-id=111 op=LOAD Dec 12 17:26:03.529000 audit[3337]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3324 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831343461343236643335636561303933383738613832326536306464 Dec 12 17:26:03.539014 systemd[1]: Started cri-containerd-2ca8faa9aa1922d718087640b88368929eef42ca14597e30230515a6a4a5ccc3.scope - libcontainer container 2ca8faa9aa1922d718087640b88368929eef42ca14597e30230515a6a4a5ccc3. Dec 12 17:26:03.543890 systemd[1]: Started cri-containerd-23c19144861afa10f9a4eb06144f4e0f9bfdbcbe0e9ca2f04327fce79b2019e4.scope - libcontainer container 23c19144861afa10f9a4eb06144f4e0f9bfdbcbe0e9ca2f04327fce79b2019e4. 
Dec 12 17:26:03.558000 audit: BPF prog-id=112 op=LOAD Dec 12 17:26:03.559000 audit: BPF prog-id=113 op=LOAD Dec 12 17:26:03.559000 audit[3398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3365 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633139313434383631616661313066396134656230363134346634 Dec 12 17:26:03.559000 audit: BPF prog-id=113 op=UNLOAD Dec 12 17:26:03.559000 audit[3398]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633139313434383631616661313066396134656230363134346634 Dec 12 17:26:03.559000 audit: BPF prog-id=114 op=LOAD Dec 12 17:26:03.559000 audit[3398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3365 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633139313434383631616661313066396134656230363134346634 Dec 12 17:26:03.559000 audit: BPF prog-id=115 op=LOAD Dec 12 17:26:03.559000 audit[3398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3365 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633139313434383631616661313066396134656230363134346634 Dec 12 17:26:03.559000 audit: BPF prog-id=115 op=UNLOAD Dec 12 17:26:03.559000 audit[3398]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633139313434383631616661313066396134656230363134346634 Dec 12 17:26:03.560000 audit: BPF prog-id=114 op=UNLOAD Dec 12 17:26:03.560000 audit[3398]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633139313434383631616661313066396134656230363134346634 Dec 12 17:26:03.560000 audit: BPF prog-id=116 op=LOAD Dec 12 17:26:03.560000 audit: BPF prog-id=117 op=LOAD Dec 12 17:26:03.561000 audit: BPF prog-id=118 op=LOAD Dec 12 17:26:03.561000 audit[3382]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=3363 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613866616139616131393232643731383038373634306238383336 Dec 12 17:26:03.561000 audit: BPF prog-id=118 op=UNLOAD Dec 12 17:26:03.561000 audit[3382]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613866616139616131393232643731383038373634306238383336 Dec 12 17:26:03.561000 audit: BPF prog-id=119 op=LOAD Dec 12 17:26:03.561000 audit[3382]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=3363 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613866616139616131393232643731383038373634306238383336 Dec 12 17:26:03.561000 audit: BPF prog-id=120 op=LOAD Dec 12 17:26:03.561000 audit[3382]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=3363 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613866616139616131393232643731383038373634306238383336 Dec 12 17:26:03.561000 audit: BPF prog-id=120 op=UNLOAD Dec 12 17:26:03.561000 audit[3382]: SYSCALL arch=c00000b7 syscall=57 success=yes 
exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613866616139616131393232643731383038373634306238383336 Dec 12 17:26:03.561000 audit: BPF prog-id=119 op=UNLOAD Dec 12 17:26:03.561000 audit[3382]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613866616139616131393232643731383038373634306238383336 Dec 12 17:26:03.561000 audit: BPF prog-id=121 op=LOAD Dec 12 17:26:03.561000 audit[3382]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=3363 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263613866616139616131393232643731383038373634306238383336 Dec 12 17:26:03.560000 audit[3398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3365 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633139313434383631616661313066396134656230363134346634 Dec 12 17:26:03.570416 containerd[2108]: time="2025-12-12T17:26:03.570312221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-74f46d5ce1,Uid:a295f4c13f275486a35185c08e8ce2d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8144a426d35cea093878a822e60ddafc97775445573b9c756b776c34ff36b336\"" Dec 12 17:26:03.573438 containerd[2108]: time="2025-12-12T17:26:03.573409093Z" level=info msg="CreateContainer within sandbox \"8144a426d35cea093878a822e60ddafc97775445573b9c756b776c34ff36b336\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:26:03.592424 containerd[2108]: time="2025-12-12T17:26:03.592338821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-74f46d5ce1,Uid:a45aa045e70900ba1bec36b18b058c97,Namespace:kube-system,Attempt:0,} returns sandbox id \"23c19144861afa10f9a4eb06144f4e0f9bfdbcbe0e9ca2f04327fce79b2019e4\"" Dec 12 17:26:03.594535 containerd[2108]: time="2025-12-12T17:26:03.594506375Z" level=info 
msg="CreateContainer within sandbox \"23c19144861afa10f9a4eb06144f4e0f9bfdbcbe0e9ca2f04327fce79b2019e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:26:03.595983 containerd[2108]: time="2025-12-12T17:26:03.595954961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-74f46d5ce1,Uid:d0684db336fbc9f987f4c12ae9b3fbef,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ca8faa9aa1922d718087640b88368929eef42ca14597e30230515a6a4a5ccc3\"" Dec 12 17:26:03.597687 containerd[2108]: time="2025-12-12T17:26:03.597658369Z" level=info msg="CreateContainer within sandbox \"2ca8faa9aa1922d718087640b88368929eef42ca14597e30230515a6a4a5ccc3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:26:03.603881 containerd[2108]: time="2025-12-12T17:26:03.603736590Z" level=info msg="Container d1d82f09d2d3d4829e88e624e3a5d1c24a004b9ec805615a77581179467ff5c8: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:03.632839 containerd[2108]: time="2025-12-12T17:26:03.632798865Z" level=info msg="Container 2c0b40b81299babc13a58b6b90a7b72652d60c6c7c483e2cff30e2af0c4e98be: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:03.642786 containerd[2108]: time="2025-12-12T17:26:03.642755465Z" level=info msg="Container 91dc9194703e34e71ff99544561eb03067422f999f82dc0d8eff3103647623e9: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:03.644135 containerd[2108]: time="2025-12-12T17:26:03.644106616Z" level=info msg="CreateContainer within sandbox \"8144a426d35cea093878a822e60ddafc97775445573b9c756b776c34ff36b336\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d1d82f09d2d3d4829e88e624e3a5d1c24a004b9ec805615a77581179467ff5c8\"" Dec 12 17:26:03.644828 containerd[2108]: time="2025-12-12T17:26:03.644592403Z" level=info msg="StartContainer for \"d1d82f09d2d3d4829e88e624e3a5d1c24a004b9ec805615a77581179467ff5c8\"" Dec 12 17:26:03.649935 containerd[2108]: time="2025-12-12T17:26:03.647974514Z" level=info msg="connecting to shim d1d82f09d2d3d4829e88e624e3a5d1c24a004b9ec805615a77581179467ff5c8" address="unix:///run/containerd/s/cd750b601291e670a7d16f15ef515c3ad211b727cbe12bb832ca2ab4fb8b721f" protocol=ttrpc version=3 Dec 12 17:26:03.658349 containerd[2108]: time="2025-12-12T17:26:03.658313402Z" level=info msg="CreateContainer within sandbox \"23c19144861afa10f9a4eb06144f4e0f9bfdbcbe0e9ca2f04327fce79b2019e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c0b40b81299babc13a58b6b90a7b72652d60c6c7c483e2cff30e2af0c4e98be\"" Dec 12 17:26:03.660052 containerd[2108]: time="2025-12-12T17:26:03.660017362Z" level=info msg="StartContainer for \"2c0b40b81299babc13a58b6b90a7b72652d60c6c7c483e2cff30e2af0c4e98be\"" Dec 12 17:26:03.661024 containerd[2108]: time="2025-12-12T17:26:03.660998944Z" level=info msg="connecting to shim 2c0b40b81299babc13a58b6b90a7b72652d60c6c7c483e2cff30e2af0c4e98be" address="unix:///run/containerd/s/59f2228b92b05f69f542dd11c92ebd6134bce8e3065f25823b1db1bcb2160640" protocol=ttrpc version=3 Dec 12 17:26:03.668051 systemd[1]: Started cri-containerd-d1d82f09d2d3d4829e88e624e3a5d1c24a004b9ec805615a77581179467ff5c8.scope - libcontainer container d1d82f09d2d3d4829e88e624e3a5d1c24a004b9ec805615a77581179467ff5c8. 
Dec 12 17:26:03.676019 containerd[2108]: time="2025-12-12T17:26:03.675610196Z" level=info msg="CreateContainer within sandbox \"2ca8faa9aa1922d718087640b88368929eef42ca14597e30230515a6a4a5ccc3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"91dc9194703e34e71ff99544561eb03067422f999f82dc0d8eff3103647623e9\"" Dec 12 17:26:03.677608 containerd[2108]: time="2025-12-12T17:26:03.677510192Z" level=info msg="StartContainer for \"91dc9194703e34e71ff99544561eb03067422f999f82dc0d8eff3103647623e9\"" Dec 12 17:26:03.679602 containerd[2108]: time="2025-12-12T17:26:03.679578392Z" level=info msg="connecting to shim 91dc9194703e34e71ff99544561eb03067422f999f82dc0d8eff3103647623e9" address="unix:///run/containerd/s/4fcd9e9e3fa623a7efc31f05dba3872ac01e713db7ca9e644a4fdb2caf9506fb" protocol=ttrpc version=3 Dec 12 17:26:03.685317 systemd[1]: Started cri-containerd-2c0b40b81299babc13a58b6b90a7b72652d60c6c7c483e2cff30e2af0c4e98be.scope - libcontainer container 2c0b40b81299babc13a58b6b90a7b72652d60c6c7c483e2cff30e2af0c4e98be. Dec 12 17:26:03.686000 audit: BPF prog-id=122 op=LOAD Dec 12 17:26:03.687000 audit: BPF prog-id=123 op=LOAD Dec 12 17:26:03.687000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3324 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431643832663039643264336434383239653838653632346533613564 Dec 12 17:26:03.687000 audit: BPF prog-id=123 op=UNLOAD Dec 12 17:26:03.687000 audit[3452]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431643832663039643264336434383239653838653632346533613564 Dec 12 17:26:03.687000 audit: BPF prog-id=124 op=LOAD Dec 12 17:26:03.687000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3324 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431643832663039643264336434383239653838653632346533613564 Dec 12 17:26:03.687000 audit: BPF prog-id=125 op=LOAD Dec 12 17:26:03.687000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3324 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:26:03.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431643832663039643264336434383239653838653632346533613564 Dec 12 17:26:03.687000 audit: BPF prog-id=125 op=UNLOAD Dec 12 17:26:03.687000 audit[3452]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431643832663039643264336434383239653838653632346533613564 Dec 12 17:26:03.687000 audit: BPF prog-id=124 op=UNLOAD Dec 12 17:26:03.687000 audit[3452]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431643832663039643264336434383239653838653632346533613564 Dec 12 17:26:03.687000 audit: BPF prog-id=126 op=LOAD Dec 12 17:26:03.687000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3324 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431643832663039643264336434383239653838653632346533613564 Dec 12 17:26:03.701049 systemd[1]: Started cri-containerd-91dc9194703e34e71ff99544561eb03067422f999f82dc0d8eff3103647623e9.scope - libcontainer container 91dc9194703e34e71ff99544561eb03067422f999f82dc0d8eff3103647623e9. 
Dec 12 17:26:03.706619 kubelet[3286]: I1212 17:26:03.706255 3286 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.706619 kubelet[3286]: E1212 17:26:03.706556 3286 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.712000 audit: BPF prog-id=127 op=LOAD Dec 12 17:26:03.712000 audit: BPF prog-id=128 op=LOAD Dec 12 17:26:03.712000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3365 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263306234306238313239396261626331336135386236623930613762 Dec 12 17:26:03.713000 audit: BPF prog-id=128 op=UNLOAD Dec 12 17:26:03.713000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263306234306238313239396261626331336135386236623930613762 Dec 12 17:26:03.714000 audit: BPF prog-id=129 op=LOAD Dec 12 17:26:03.714000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3365 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263306234306238313239396261626331336135386236623930613762 Dec 12 17:26:03.715000 audit: BPF prog-id=130 op=LOAD Dec 12 17:26:03.715000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3365 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263306234306238313239396261626331336135386236623930613762 Dec 12 17:26:03.716000 audit: BPF prog-id=130 op=UNLOAD Dec 12 17:26:03.716000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263306234306238313239396261626331336135386236623930613762 Dec 12 17:26:03.716000 audit: BPF prog-id=129 op=UNLOAD Dec 12 17:26:03.716000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263306234306238313239396261626331336135386236623930613762 Dec 12 17:26:03.717000 audit: BPF prog-id=131 op=LOAD Dec 12 17:26:03.717000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3365 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263306234306238313239396261626331336135386236623930613762 Dec 12 17:26:03.722519 kubelet[3286]: W1212 17:26:03.722464 3286 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-74f46d5ce1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 12 17:26:03.722659 kubelet[3286]: E1212 17:26:03.722616 3286 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-74f46d5ce1&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:26:03.727118 containerd[2108]: time="2025-12-12T17:26:03.727035735Z" level=info msg="StartContainer for \"d1d82f09d2d3d4829e88e624e3a5d1c24a004b9ec805615a77581179467ff5c8\" returns successfully" Dec 12 17:26:03.729000 audit: BPF prog-id=132 op=LOAD Dec 12 17:26:03.730000 audit: BPF prog-id=133 op=LOAD Dec 12 17:26:03.730000 audit[3484]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3363 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931646339313934373033653334653731666639393534343536316562 Dec 12 17:26:03.730000 audit: BPF prog-id=133 op=UNLOAD Dec 12 17:26:03.730000 audit[3484]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931646339313934373033653334653731666639393534343536316562 Dec 12 17:26:03.730000 audit: BPF prog-id=134 op=LOAD Dec 12 17:26:03.730000 audit[3484]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3363 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931646339313934373033653334653731666639393534343536316562 Dec 12 17:26:03.730000 audit: BPF prog-id=135 op=LOAD Dec 12 17:26:03.730000 audit[3484]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3363 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931646339313934373033653334653731666639393534343536316562 Dec 12 17:26:03.730000 audit: BPF prog-id=135 op=UNLOAD Dec 12 17:26:03.730000 audit[3484]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931646339313934373033653334653731666639393534343536316562 Dec 12 17:26:03.730000 audit: BPF prog-id=134 op=UNLOAD Dec 12 17:26:03.730000 audit[3484]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3363 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931646339313934373033653334653731666639393534343536316562 Dec 12 17:26:03.730000 audit: BPF prog-id=136 op=LOAD Dec 12 17:26:03.730000 audit[3484]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3363 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:03.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931646339313934373033653334653731666639393534343536316562 Dec 12 17:26:03.770075 containerd[2108]: time="2025-12-12T17:26:03.770038062Z" level=info msg="StartContainer for \"2c0b40b81299babc13a58b6b90a7b72652d60c6c7c483e2cff30e2af0c4e98be\" returns successfully" Dec 12 17:26:03.780727 containerd[2108]: time="2025-12-12T17:26:03.780693902Z" level=info msg="StartContainer for \"91dc9194703e34e71ff99544561eb03067422f999f82dc0d8eff3103647623e9\" returns successfully" Dec 12 17:26:03.988805 kubelet[3286]: E1212 17:26:03.988700 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.993034 kubelet[3286]: E1212 17:26:03.992918 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:03.995842 kubelet[3286]: E1212 17:26:03.995741 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:04.509406 kubelet[3286]: I1212 17:26:04.509283 3286 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:04.998092 kubelet[3286]: E1212 17:26:04.997122 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:04.998415 kubelet[3286]: E1212 17:26:04.998037 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:05.092236 kubelet[3286]: E1212 17:26:05.092187 3286 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:05.291145 kubelet[3286]: I1212 17:26:05.290674 3286 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:05.291145 kubelet[3286]: E1212 17:26:05.290713 3286 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-a-74f46d5ce1\": node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:05.317907 kubelet[3286]: E1212 17:26:05.317881 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:05.418959 kubelet[3286]: E1212 17:26:05.418912 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:05.519815 kubelet[3286]: E1212 17:26:05.519764 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:05.620266 kubelet[3286]: E1212 17:26:05.620224 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 
17:26:05.720720 kubelet[3286]: E1212 17:26:05.720674 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:05.821165 kubelet[3286]: E1212 17:26:05.821113 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:05.922369 kubelet[3286]: E1212 17:26:05.922113 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:05.998507 kubelet[3286]: E1212 17:26:05.998356 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:05.998507 kubelet[3286]: E1212 17:26:05.998420 3286 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:06.023142 kubelet[3286]: E1212 17:26:06.023097 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:06.120298 kubelet[3286]: I1212 17:26:06.120258 3286 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:06.152931 kubelet[3286]: W1212 17:26:06.152892 3286 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 17:26:06.153072 kubelet[3286]: I1212 17:26:06.153045 3286 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:06.161426 kubelet[3286]: W1212 17:26:06.161196 3286 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 17:26:06.161426 kubelet[3286]: I1212 17:26:06.161277 3286 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:06.167803 kubelet[3286]: W1212 17:26:06.167603 3286 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 17:26:06.902752 kubelet[3286]: I1212 17:26:06.902710 3286 apiserver.go:52] "Watching apiserver" Dec 12 17:26:06.922904 kubelet[3286]: I1212 17:26:06.922850 3286 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:26:07.365121 systemd[1]: Reload requested from client PID 3551 ('systemctl') (unit session-9.scope)... Dec 12 17:26:07.365391 systemd[1]: Reloading... Dec 12 17:26:07.459900 zram_generator::config[3603]: No configuration found. Dec 12 17:26:07.619755 systemd[1]: Reloading finished in 254 ms. Dec 12 17:26:07.641643 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:07.655125 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:26:07.655353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:07.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:07.658794 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 17:26:07.658848 kernel: audit: type=1131 audit(1765560367.654:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.658916 systemd[1]: kubelet.service: Consumed 887ms CPU time, 127.7M memory peak. Dec 12 17:26:07.664129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:07.673000 audit: BPF prog-id=137 op=LOAD Dec 12 17:26:07.678883 kernel: audit: type=1334 audit(1765560367.673:428): prog-id=137 op=LOAD Dec 12 17:26:07.679000 audit: BPF prog-id=138 op=LOAD Dec 12 17:26:07.679000 audit: BPF prog-id=93 op=UNLOAD Dec 12 17:26:07.690832 kernel: audit: type=1334 audit(1765560367.679:429): prog-id=138 op=LOAD Dec 12 17:26:07.690904 kernel: audit: type=1334 audit(1765560367.679:430): prog-id=93 op=UNLOAD Dec 12 17:26:07.695639 kernel: audit: type=1334 audit(1765560367.679:431): prog-id=94 op=UNLOAD Dec 12 17:26:07.679000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:26:07.684000 audit: BPF prog-id=139 op=LOAD Dec 12 17:26:07.700657 kernel: audit: type=1334 audit(1765560367.684:432): prog-id=139 op=LOAD Dec 12 17:26:07.684000 audit: BPF prog-id=97 op=UNLOAD Dec 12 17:26:07.705150 kernel: audit: type=1334 audit(1765560367.684:433): prog-id=97 op=UNLOAD Dec 12 17:26:07.684000 audit: BPF prog-id=140 op=LOAD Dec 12 17:26:07.709731 kernel: audit: type=1334 audit(1765560367.684:434): prog-id=140 op=LOAD Dec 12 17:26:07.684000 audit: BPF prog-id=141 op=LOAD Dec 12 17:26:07.714338 kernel: audit: type=1334 audit(1765560367.684:435): prog-id=141 op=LOAD Dec 12 17:26:07.684000 audit: BPF prog-id=98 op=UNLOAD Dec 12 17:26:07.719038 kernel: audit: type=1334 audit(1765560367.684:436): prog-id=98 op=UNLOAD Dec 12 17:26:07.684000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:26:07.689000 audit: BPF prog-id=142 op=LOAD Dec 12 17:26:07.690000 audit: BPF prog-id=100 op=UNLOAD Dec 12 17:26:07.694000 audit: BPF prog-id=143 op=LOAD Dec 12 17:26:07.694000 audit: BPF prog-id=144 op=LOAD Dec 12 17:26:07.694000 audit: BPF prog-id=101 op=UNLOAD Dec 12 17:26:07.694000 audit: BPF prog-id=102 op=UNLOAD Dec 12 17:26:07.704000 audit: BPF prog-id=145 op=LOAD Dec 12 17:26:07.704000 audit: BPF prog-id=96 op=UNLOAD Dec 12 17:26:07.708000 audit: BPF prog-id=146 op=LOAD Dec 12 17:26:07.708000 audit: BPF prog-id=103 op=UNLOAD Dec 12 17:26:07.713000 audit: BPF prog-id=147 op=LOAD Dec 12 17:26:07.713000 audit: BPF prog-id=95 op=UNLOAD Dec 12 17:26:07.717000 audit: BPF prog-id=148 op=LOAD Dec 12 17:26:07.717000 audit: BPF prog-id=90 op=UNLOAD Dec 12 17:26:07.718000 audit: BPF prog-id=149 op=LOAD Dec 12 17:26:07.718000 audit: BPF prog-id=150 op=LOAD Dec 12 17:26:07.718000 audit: BPF prog-id=91 op=UNLOAD Dec 12 17:26:07.718000 audit: BPF prog-id=92 op=UNLOAD Dec 12 17:26:07.719000 audit: BPF prog-id=151 op=LOAD Dec 12 17:26:07.719000 audit: BPF prog-id=104 op=UNLOAD Dec 12 17:26:07.719000 audit: BPF prog-id=152 op=LOAD Dec 12 17:26:07.719000 audit: BPF prog-id=153 op=LOAD Dec 12 17:26:07.719000 audit: BPF prog-id=105 op=UNLOAD Dec 12 17:26:07.719000 audit: BPF prog-id=106 op=UNLOAD Dec 12 17:26:07.719000 audit: BPF prog-id=154 op=LOAD Dec 12 17:26:07.719000 audit: BPF prog-id=87 op=UNLOAD Dec 12 17:26:07.719000 audit: BPF prog-id=155 op=LOAD Dec 12 17:26:07.719000 audit: BPF prog-id=156 op=LOAD Dec 12 17:26:07.720000 audit: BPF prog-id=88 op=UNLOAD Dec 12 
17:26:07.720000 audit: BPF prog-id=89 op=UNLOAD Dec 12 17:26:07.817437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:07.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:07.824124 (kubelet)[3664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:26:07.850864 kubelet[3664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:07.850864 kubelet[3664]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:26:07.850864 kubelet[3664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:07.850864 kubelet[3664]: I1212 17:26:07.850700 3664 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:26:07.855895 kubelet[3664]: I1212 17:26:07.855156 3664 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:26:07.855895 kubelet[3664]: I1212 17:26:07.855180 3664 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:26:07.855895 kubelet[3664]: I1212 17:26:07.855344 3664 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:26:07.856500 kubelet[3664]: I1212 17:26:07.856478 3664 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 17:26:07.858983 kubelet[3664]: I1212 17:26:07.858358 3664 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:26:07.863091 kubelet[3664]: I1212 17:26:07.863074 3664 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:26:07.867205 kubelet[3664]: I1212 17:26:07.867182 3664 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:26:07.867494 kubelet[3664]: I1212 17:26:07.867468 3664 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:26:07.867742 kubelet[3664]: I1212 17:26:07.867566 3664 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-74f46d5ce1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:26:07.867889 kubelet[3664]: I1212 17:26:07.867848 3664 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:26:07.867946 kubelet[3664]: I1212 17:26:07.867938 3664 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:26:07.868022 kubelet[3664]: I1212 17:26:07.868014 3664 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:07.868212 kubelet[3664]: I1212 17:26:07.868202 3664 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:26:07.868318 kubelet[3664]: I1212 17:26:07.868308 3664 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:26:07.868379 kubelet[3664]: I1212 17:26:07.868372 3664 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:26:07.868430 kubelet[3664]: I1212 17:26:07.868423 3664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:26:07.871300 kubelet[3664]: I1212 17:26:07.870477 3664 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:26:07.871300 kubelet[3664]: I1212 17:26:07.870800 3664 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:26:07.871300 kubelet[3664]: I1212 17:26:07.871177 3664 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:26:07.871300 kubelet[3664]: I1212 17:26:07.871201 3664 server.go:1287] "Started kubelet" Dec 12 17:26:07.873729 kubelet[3664]: I1212 17:26:07.873642 3664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:26:07.885370 kubelet[3664]: I1212 17:26:07.885333 3664 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:26:07.886732 kubelet[3664]: I1212 17:26:07.886290 3664 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:26:07.887068 kubelet[3664]: I1212 17:26:07.887005 3664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:26:07.887216 kubelet[3664]: I1212 17:26:07.887198 3664 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:26:07.887362 kubelet[3664]: I1212 17:26:07.887345 3664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:26:07.888640 kubelet[3664]: I1212 17:26:07.888611 3664 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:26:07.888787 kubelet[3664]: E1212 17:26:07.888767 3664 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-74f46d5ce1\" not found" Dec 12 17:26:07.890813 kubelet[3664]: I1212 17:26:07.890784 3664 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:26:07.890924 kubelet[3664]: I1212 17:26:07.890910 3664 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:26:07.893454 kubelet[3664]: I1212 17:26:07.893431 3664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:26:07.893985 kubelet[3664]: I1212 17:26:07.893954 3664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:26:07.896530 kubelet[3664]: I1212 17:26:07.896514 3664 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:26:07.896626 kubelet[3664]: I1212 17:26:07.896618 3664 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:26:07.896752 kubelet[3664]: I1212 17:26:07.896725 3664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:26:07.896752 kubelet[3664]: I1212 17:26:07.896748 3664 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:26:07.896808 kubelet[3664]: I1212 17:26:07.896766 3664 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:26:07.896808 kubelet[3664]: I1212 17:26:07.896771 3664 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:26:07.896842 kubelet[3664]: E1212 17:26:07.896804 3664 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:26:07.948600 kubelet[3664]: I1212 17:26:07.948569 3664 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:26:07.948920 kubelet[3664]: I1212 17:26:07.948903 3664 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:26:07.949892 kubelet[3664]: I1212 17:26:07.948993 3664 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:07.949892 kubelet[3664]: I1212 17:26:07.949233 3664 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:26:07.949892 kubelet[3664]: I1212 17:26:07.949249 3664 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:26:07.949892 kubelet[3664]: I1212 17:26:07.949268 3664 policy_none.go:49] "None policy: Start" Dec 12 17:26:07.949892 kubelet[3664]: I1212 17:26:07.949275 3664 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:26:07.949892 kubelet[3664]: I1212 17:26:07.949286 3664 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:26:07.949892 kubelet[3664]: I1212 17:26:07.949367 3664 state_mem.go:75] "Updated machine memory state" Dec 12 17:26:07.953441 kubelet[3664]: I1212 17:26:07.953422 3664 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:26:07.954429 kubelet[3664]: I1212 17:26:07.954411 3664 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:26:07.954972 kubelet[3664]: I1212 17:26:07.954878 3664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:26:07.955709 kubelet[3664]: I1212 17:26:07.955697 3664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:26:07.957483 kubelet[3664]: E1212 17:26:07.957465 3664 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:26:07.998087 kubelet[3664]: I1212 17:26:07.998048 3664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:07.998227 kubelet[3664]: I1212 17:26:07.998212 3664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:07.998867 kubelet[3664]: I1212 17:26:07.998819 3664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.009671 kubelet[3664]: W1212 17:26:08.009632 3664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 17:26:08.009819 kubelet[3664]: E1212 17:26:08.009802 3664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-74f46d5ce1\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.016252 kubelet[3664]: W1212 17:26:08.016235 3664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 17:26:08.016425 kubelet[3664]: E1212 17:26:08.016400 3664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-74f46d5ce1\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.016472 kubelet[3664]: W1212 17:26:08.016362 3664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 17:26:08.016472 kubelet[3664]: E1212 17:26:08.016469 3664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.057732 kubelet[3664]: I1212 17:26:08.057686 3664 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.068046 kubelet[3664]: I1212 17:26:08.068020 3664 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.068108 kubelet[3664]: I1212 17:26:08.068093 3664 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092416 kubelet[3664]: I1212 17:26:08.092388 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a45aa045e70900ba1bec36b18b058c97-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-74f46d5ce1\" (UID: \"a45aa045e70900ba1bec36b18b058c97\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092416 kubelet[3664]: I1212 17:26:08.092422 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a295f4c13f275486a35185c08e8ce2d6-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-74f46d5ce1\" (UID: \"a295f4c13f275486a35185c08e8ce2d6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092520 kubelet[3664]: I1212 17:26:08.092434 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092520 kubelet[3664]: I1212 17:26:08.092447 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092520 kubelet[3664]: I1212 17:26:08.092466 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a295f4c13f275486a35185c08e8ce2d6-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-74f46d5ce1\" (UID: \"a295f4c13f275486a35185c08e8ce2d6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092520 kubelet[3664]: I1212 17:26:08.092477 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a295f4c13f275486a35185c08e8ce2d6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-74f46d5ce1\" (UID: \"a295f4c13f275486a35185c08e8ce2d6\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092520 kubelet[3664]: I1212 17:26:08.092486 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092599 kubelet[3664]: I1212 17:26:08.092497 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.092599 kubelet[3664]: I1212 17:26:08.092509 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0684db336fbc9f987f4c12ae9b3fbef-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-74f46d5ce1\" (UID: \"d0684db336fbc9f987f4c12ae9b3fbef\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.875832 kubelet[3664]: I1212 17:26:08.875745 3664 apiserver.go:52] "Watching apiserver" Dec 12 17:26:08.891589 kubelet[3664]: I1212 17:26:08.891538 3664 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:26:08.934198 kubelet[3664]: I1212 17:26:08.934091 3664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.934327 kubelet[3664]: I1212 17:26:08.934319 3664 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.946136 kubelet[3664]: W1212 17:26:08.946101 3664 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 17:26:08.946409 kubelet[3664]: E1212 17:26:08.946275 3664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-74f46d5ce1\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.946519 kubelet[3664]: W1212 17:26:08.946507 3664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 17:26:08.946694 kubelet[3664]: E1212 17:26:08.946606 3664 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-74f46d5ce1\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:08.963449 kubelet[3664]: I1212 17:26:08.963402 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-a-74f46d5ce1" podStartSLOduration=2.963376626 podStartE2EDuration="2.963376626s" podCreationTimestamp="2025-12-12 17:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:08.963183964 +0000 UTC m=+1.136456593" watchObservedRunningTime="2025-12-12 17:26:08.963376626 +0000 UTC m=+1.136649255" Dec 12 17:26:08.983302 kubelet[3664]: I1212 17:26:08.982922 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-74f46d5ce1" podStartSLOduration=2.982908361 podStartE2EDuration="2.982908361s" podCreationTimestamp="2025-12-12 17:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:08.982333209 +0000 UTC m=+1.155605838" watchObservedRunningTime="2025-12-12 17:26:08.982908361 +0000 UTC m=+1.156180990" Dec 12 17:26:08.983302 kubelet[3664]: I1212 17:26:08.982983 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-a-74f46d5ce1" podStartSLOduration=2.982980435 podStartE2EDuration="2.982980435s" podCreationTimestamp="2025-12-12 17:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:08.973561597 +0000 UTC m=+1.146834234" watchObservedRunningTime="2025-12-12 17:26:08.982980435 +0000 UTC m=+1.156253064" Dec 12 17:26:13.729879 kubelet[3664]: I1212 17:26:13.729834 3664 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:26:13.730589 containerd[2108]: time="2025-12-12T17:26:13.730549158Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:26:13.731313 kubelet[3664]: I1212 17:26:13.730709 3664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:26:14.799084 systemd[1]: Created slice kubepods-besteffort-pod1e996007_4e1f_4bda_abec_64a7a120d63d.slice - libcontainer container kubepods-besteffort-pod1e996007_4e1f_4bda_abec_64a7a120d63d.slice. 
Dec 12 17:26:14.824978 kubelet[3664]: I1212 17:26:14.824938 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e996007-4e1f-4bda-abec-64a7a120d63d-xtables-lock\") pod \"kube-proxy-ggwql\" (UID: \"1e996007-4e1f-4bda-abec-64a7a120d63d\") " pod="kube-system/kube-proxy-ggwql" Dec 12 17:26:14.824978 kubelet[3664]: I1212 17:26:14.824978 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e996007-4e1f-4bda-abec-64a7a120d63d-kube-proxy\") pod \"kube-proxy-ggwql\" (UID: \"1e996007-4e1f-4bda-abec-64a7a120d63d\") " pod="kube-system/kube-proxy-ggwql" Dec 12 17:26:14.824978 kubelet[3664]: I1212 17:26:14.824992 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e996007-4e1f-4bda-abec-64a7a120d63d-lib-modules\") pod \"kube-proxy-ggwql\" (UID: \"1e996007-4e1f-4bda-abec-64a7a120d63d\") " pod="kube-system/kube-proxy-ggwql" Dec 12 17:26:14.825361 kubelet[3664]: I1212 17:26:14.825006 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2s5b\" (UniqueName: \"kubernetes.io/projected/1e996007-4e1f-4bda-abec-64a7a120d63d-kube-api-access-z2s5b\") pod \"kube-proxy-ggwql\" (UID: \"1e996007-4e1f-4bda-abec-64a7a120d63d\") " pod="kube-system/kube-proxy-ggwql" Dec 12 17:26:14.858468 systemd[1]: Created slice kubepods-besteffort-poda3c7eb27_87d0_4c0b_8aaa_92383fad0bd7.slice - libcontainer container kubepods-besteffort-poda3c7eb27_87d0_4c0b_8aaa_92383fad0bd7.slice. Dec 12 17:26:14.926890 kubelet[3664]: I1212 17:26:14.926103 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgtc\" (UniqueName: \"kubernetes.io/projected/a3c7eb27-87d0-4c0b-8aaa-92383fad0bd7-kube-api-access-7xgtc\") pod \"tigera-operator-7dcd859c48-v4jgn\" (UID: \"a3c7eb27-87d0-4c0b-8aaa-92383fad0bd7\") " pod="tigera-operator/tigera-operator-7dcd859c48-v4jgn" Dec 12 17:26:14.926890 kubelet[3664]: I1212 17:26:14.926144 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3c7eb27-87d0-4c0b-8aaa-92383fad0bd7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-v4jgn\" (UID: \"a3c7eb27-87d0-4c0b-8aaa-92383fad0bd7\") " pod="tigera-operator/tigera-operator-7dcd859c48-v4jgn" Dec 12 17:26:15.113159 containerd[2108]: time="2025-12-12T17:26:15.113124911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggwql,Uid:1e996007-4e1f-4bda-abec-64a7a120d63d,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:15.163800 containerd[2108]: time="2025-12-12T17:26:15.163769069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v4jgn,Uid:a3c7eb27-87d0-4c0b-8aaa-92383fad0bd7,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:26:15.166626 containerd[2108]: time="2025-12-12T17:26:15.166584098Z" level=info msg="connecting to shim 47d4b0c8b41ce4c6176b6b4f0704a8f945d34d181abcaf309f5546f2d3a9f1bf" address="unix:///run/containerd/s/649c5a93bdaaf9a3c8f8f98b14dc6088c38a6887e1cd1146ff1d1e4b1816bb63" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:15.190030 systemd[1]: Started cri-containerd-47d4b0c8b41ce4c6176b6b4f0704a8f945d34d181abcaf309f5546f2d3a9f1bf.scope - libcontainer container 
47d4b0c8b41ce4c6176b6b4f0704a8f945d34d181abcaf309f5546f2d3a9f1bf. Dec 12 17:26:15.197000 audit: BPF prog-id=157 op=LOAD Dec 12 17:26:15.201671 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:26:15.201739 kernel: audit: type=1334 audit(1765560375.197:469): prog-id=157 op=LOAD Dec 12 17:26:15.210250 kernel: audit: type=1334 audit(1765560375.207:470): prog-id=158 op=LOAD Dec 12 17:26:15.207000 audit: BPF prog-id=158 op=LOAD Dec 12 17:26:15.207000 audit[3726]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.216095 containerd[2108]: time="2025-12-12T17:26:15.216057478Z" level=info msg="connecting to shim 981a1ec7d8fa3f18366c0d4eae17c37b0e59e40552ffb9f6d6ff13867db4057d" address="unix:///run/containerd/s/fa4ed51c081bececcebb621fd4bf01b32188b043d5318705a57f3d93b429deed" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:15.235013 kernel: audit: type=1300 audit(1765560375.207:470): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.253990 kernel: audit: type=1327 audit(1765560375.207:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.207000 audit: BPF prog-id=158 op=UNLOAD Dec 12 17:26:15.259663 kernel: audit: type=1334 audit(1765560375.207:471): prog-id=158 op=UNLOAD Dec 12 17:26:15.207000 audit[3726]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.279454 kernel: audit: type=1300 audit(1765560375.207:471): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.299427 kernel: audit: type=1327 audit(1765560375.207:471): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 
17:26:15.207000 audit: BPF prog-id=159 op=LOAD Dec 12 17:26:15.304592 kernel: audit: type=1334 audit(1765560375.207:472): prog-id=159 op=LOAD Dec 12 17:26:15.207000 audit[3726]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.322105 kernel: audit: type=1300 audit(1765560375.207:472): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.338718 kernel: audit: type=1327 audit(1765560375.207:472): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.339888 containerd[2108]: time="2025-12-12T17:26:15.339634642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggwql,Uid:1e996007-4e1f-4bda-abec-64a7a120d63d,Namespace:kube-system,Attempt:0,} returns sandbox id \"47d4b0c8b41ce4c6176b6b4f0704a8f945d34d181abcaf309f5546f2d3a9f1bf\"" Dec 12 17:26:15.341237 systemd[1]: Started cri-containerd-981a1ec7d8fa3f18366c0d4eae17c37b0e59e40552ffb9f6d6ff13867db4057d.scope - libcontainer container 981a1ec7d8fa3f18366c0d4eae17c37b0e59e40552ffb9f6d6ff13867db4057d. 
Dec 12 17:26:15.207000 audit: BPF prog-id=160 op=LOAD Dec 12 17:26:15.207000 audit[3726]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.207000 audit: BPF prog-id=160 op=UNLOAD Dec 12 17:26:15.207000 audit[3726]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.207000 audit: BPF prog-id=159 op=UNLOAD Dec 12 17:26:15.207000 audit[3726]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.207000 audit: BPF prog-id=161 op=LOAD Dec 12 17:26:15.207000 audit[3726]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3715 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643462306338623431636534633631373662366234663037303461 Dec 12 17:26:15.345822 containerd[2108]: time="2025-12-12T17:26:15.345747368Z" level=info msg="CreateContainer within sandbox \"47d4b0c8b41ce4c6176b6b4f0704a8f945d34d181abcaf309f5546f2d3a9f1bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:26:15.355000 audit: BPF prog-id=162 op=LOAD Dec 12 17:26:15.356000 audit: BPF prog-id=163 op=LOAD Dec 12 17:26:15.356000 audit[3767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3756 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.356000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938316131656337643866613366313833363663306434656165313763 Dec 12 17:26:15.356000 audit: BPF prog-id=163 op=UNLOAD Dec 12 17:26:15.356000 audit[3767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3756 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938316131656337643866613366313833363663306434656165313763 Dec 12 17:26:15.356000 audit: BPF prog-id=164 op=LOAD Dec 12 17:26:15.356000 audit[3767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3756 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938316131656337643866613366313833363663306434656165313763 Dec 12 17:26:15.356000 audit: BPF prog-id=165 op=LOAD Dec 12 17:26:15.356000 audit[3767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3756 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938316131656337643866613366313833363663306434656165313763 Dec 12 17:26:15.356000 audit: BPF prog-id=165 op=UNLOAD Dec 12 17:26:15.356000 audit[3767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3756 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938316131656337643866613366313833363663306434656165313763 Dec 12 17:26:15.356000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:26:15.356000 audit[3767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3756 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.356000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938316131656337643866613366313833363663306434656165313763 Dec 12 17:26:15.356000 audit: BPF prog-id=166 op=LOAD Dec 12 17:26:15.356000 audit[3767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3756 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938316131656337643866613366313833363663306434656165313763 Dec 12 17:26:15.407300 containerd[2108]: time="2025-12-12T17:26:15.405991592Z" level=info msg="Container d184757accd312f1a7a2d6ee1cd5cd0cf53110c53741d35fee5a6a8db00900e4: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:15.413254 containerd[2108]: time="2025-12-12T17:26:15.413214222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v4jgn,Uid:a3c7eb27-87d0-4c0b-8aaa-92383fad0bd7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"981a1ec7d8fa3f18366c0d4eae17c37b0e59e40552ffb9f6d6ff13867db4057d\"" Dec 12 17:26:15.416254 containerd[2108]: time="2025-12-12T17:26:15.416148366Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:26:15.433450 containerd[2108]: time="2025-12-12T17:26:15.433416377Z" level=info msg="CreateContainer within sandbox \"47d4b0c8b41ce4c6176b6b4f0704a8f945d34d181abcaf309f5546f2d3a9f1bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d184757accd312f1a7a2d6ee1cd5cd0cf53110c53741d35fee5a6a8db00900e4\"" Dec 12 17:26:15.434000 containerd[2108]: time="2025-12-12T17:26:15.433972525Z" level=info msg="StartContainer for \"d184757accd312f1a7a2d6ee1cd5cd0cf53110c53741d35fee5a6a8db00900e4\"" Dec 12 17:26:15.435368 containerd[2108]: time="2025-12-12T17:26:15.435342251Z" level=info msg="connecting to shim d184757accd312f1a7a2d6ee1cd5cd0cf53110c53741d35fee5a6a8db00900e4" address="unix:///run/containerd/s/649c5a93bdaaf9a3c8f8f98b14dc6088c38a6887e1cd1146ff1d1e4b1816bb63" protocol=ttrpc version=3 Dec 12 17:26:15.458014 systemd[1]: Started cri-containerd-d184757accd312f1a7a2d6ee1cd5cd0cf53110c53741d35fee5a6a8db00900e4.scope - libcontainer container d184757accd312f1a7a2d6ee1cd5cd0cf53110c53741d35fee5a6a8db00900e4. 
Dec 12 17:26:15.507000 audit: BPF prog-id=167 op=LOAD Dec 12 17:26:15.507000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3715 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383437353761636364333132663161376132643665653163643563 Dec 12 17:26:15.507000 audit: BPF prog-id=168 op=LOAD Dec 12 17:26:15.507000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3715 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383437353761636364333132663161376132643665653163643563 Dec 12 17:26:15.507000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:26:15.507000 audit[3800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383437353761636364333132663161376132643665653163643563 Dec 12 17:26:15.507000 audit: BPF prog-id=167 op=UNLOAD Dec 12 17:26:15.507000 audit[3800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383437353761636364333132663161376132643665653163643563 Dec 12 17:26:15.507000 audit: BPF prog-id=169 op=LOAD Dec 12 17:26:15.507000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3715 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383437353761636364333132663161376132643665653163643563 Dec 12 17:26:15.526676 containerd[2108]: time="2025-12-12T17:26:15.526630571Z" level=info msg="StartContainer for 
\"d184757accd312f1a7a2d6ee1cd5cd0cf53110c53741d35fee5a6a8db00900e4\" returns successfully" Dec 12 17:26:15.616000 audit[3863]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.616000 audit[3863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffae1d8a0 a2=0 a3=1 items=0 ppid=3812 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.616000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:26:15.617000 audit[3864]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=3864 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.617000 audit[3864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa9ff580 a2=0 a3=1 items=0 ppid=3812 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:26:15.617000 audit[3865]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=3865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.617000 audit[3865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe46558e0 a2=0 a3=1 items=0 ppid=3812 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:26:15.617000 audit[3866]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=3866 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.617000 audit[3866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9554810 a2=0 a3=1 items=0 ppid=3812 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:26:15.618000 audit[3867]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_chain pid=3867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.618000 audit[3867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb017900 a2=0 a3=1 items=0 ppid=3812 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.618000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:26:15.619000 audit[3868]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3868 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.619000 audit[3868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd85360b0 a2=0 a3=1 items=0 ppid=3812 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:26:15.721000 audit[3869]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.721000 audit[3869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe6c15ab0 a2=0 a3=1 items=0 ppid=3812 pid=3869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:26:15.723000 audit[3871]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.723000 audit[3871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd57e7960 a2=0 a3=1 items=0 ppid=3812 pid=3871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 17:26:15.726000 audit[3874]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.726000 audit[3874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffd47c220 a2=0 a3=1 items=0 ppid=3812 pid=3874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 17:26:15.728000 audit[3875]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.728000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd41a7c50 a2=0 a3=1 items=0 ppid=3812 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:26:15.730000 
audit[3877]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.730000 audit[3877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff51158d0 a2=0 a3=1 items=0 ppid=3812 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:26:15.731000 audit[3878]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.731000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5f57230 a2=0 a3=1 items=0 ppid=3812 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:26:15.733000 audit[3880]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.733000 audit[3880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffecec34c0 a2=0 a3=1 items=0 ppid=3812 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:26:15.736000 audit[3883]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.736000 audit[3883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff754f320 a2=0 a3=1 items=0 ppid=3812 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 17:26:15.737000 audit[3884]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.737000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe19a2040 a2=0 a3=1 items=0 ppid=3812 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:26:15.739000 audit[3886]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.739000 audit[3886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd06d68c0 a2=0 a3=1 items=0 ppid=3812 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:26:15.740000 audit[3887]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.740000 audit[3887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc09ded40 a2=0 a3=1 items=0 ppid=3812 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:26:15.742000 audit[3889]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.742000 audit[3889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd61fd980 a2=0 a3=1 items=0 ppid=3812 pid=3889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:26:15.745000 audit[3892]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.745000 audit[3892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff93ce120 a2=0 a3=1 items=0 ppid=3812 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:26:15.748000 audit[3895]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.748000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe771eb80 
a2=0 a3=1 items=0 ppid=3812 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.748000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:26:15.749000 audit[3896]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.749000 audit[3896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdb175840 a2=0 a3=1 items=0 ppid=3812 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:26:15.751000 audit[3898]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.751000 audit[3898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff69e98b0 a2=0 a3=1 items=0 ppid=3812 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:15.754000 audit[3901]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.754000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffb8e3b50 a2=0 a3=1 items=0 ppid=3812 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:15.755000 audit[3902]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:15.755000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffeca85e0 a2=0 a3=1 items=0 ppid=3812 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:26:15.757000 audit[3904]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Dec 12 17:26:15.757000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffdf80eaa0 a2=0 a3=1 items=0 ppid=3812 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:26:15.844000 audit[3910]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:15.844000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd88f2570 a2=0 a3=1 items=0 ppid=3812 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:15.850000 audit[3910]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:15.850000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd88f2570 a2=0 a3=1 items=0 ppid=3812 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:15.852000 audit[3915]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.852000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffa62b5c0 a2=0 a3=1 items=0 ppid=3812 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.852000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:26:15.854000 audit[3917]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.854000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcb84df90 a2=0 a3=1 items=0 ppid=3812 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 17:26:15.859000 audit[3920]: NETFILTER_CFG table=filter:86 
family=10 entries=1 op=nft_register_rule pid=3920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.859000 audit[3920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffead93af0 a2=0 a3=1 items=0 ppid=3812 pid=3920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 17:26:15.860000 audit[3921]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.860000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc48ba9a0 a2=0 a3=1 items=0 ppid=3812 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:26:15.863000 audit[3923]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.863000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffceae7980 a2=0 a3=1 items=0 ppid=3812 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:26:15.864000 audit[3924]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.864000 audit[3924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeadf9650 a2=0 a3=1 items=0 ppid=3812 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:26:15.866000 audit[3926]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.866000 audit[3926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffea0389a0 a2=0 a3=1 items=0 ppid=3812 pid=3926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.866000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 17:26:15.869000 audit[3929]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.869000 audit[3929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc3b308f0 a2=0 a3=1 items=0 ppid=3812 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:26:15.870000 audit[3930]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.870000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4f590d0 a2=0 a3=1 items=0 ppid=3812 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:26:15.872000 audit[3932]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.872000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff5c685f0 a2=0 a3=1 items=0 ppid=3812 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:26:15.873000 audit[3933]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.873000 audit[3933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd81316b0 a2=0 a3=1 items=0 ppid=3812 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.873000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:26:15.875000 audit[3935]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.875000 audit[3935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdb1b62a0 a2=0 a3=1 items=0 ppid=3812 pid=3935 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.875000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:26:15.878000 audit[3938]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.878000 audit[3938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc53116a0 a2=0 a3=1 items=0 ppid=3812 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:26:15.881000 audit[3941]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.881000 audit[3941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd0c13270 a2=0 a3=1 items=0 ppid=3812 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 17:26:15.882000 audit[3942]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.882000 audit[3942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdb7d8320 a2=0 a3=1 items=0 ppid=3812 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:26:15.884000 audit[3944]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.884000 audit[3944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe04335d0 a2=0 a3=1 items=0 ppid=3812 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 
17:26:15.887000 audit[3947]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.887000 audit[3947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd316d830 a2=0 a3=1 items=0 ppid=3812 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:15.888000 audit[3948]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.888000 audit[3948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd20029f0 a2=0 a3=1 items=0 ppid=3812 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:26:15.890000 audit[3950]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.890000 audit[3950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdd2ee950 a2=0 a3=1 items=0 ppid=3812 pid=3950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:26:15.891000 audit[3951]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.891000 audit[3951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe30f9bb0 a2=0 a3=1 items=0 ppid=3812 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:26:15.893000 audit[3953]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.893000 audit[3953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc6ddde70 a2=0 a3=1 items=0 ppid=3812 pid=3953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.893000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:15.896000 audit[3956]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:15.896000 audit[3956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff6e50d20 a2=0 a3=1 items=0 ppid=3812 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:15.900000 audit[3958]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:26:15.900000 audit[3958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff9cd3520 a2=0 a3=1 items=0 ppid=3812 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.900000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:15.901000 audit[3958]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:26:15.901000 audit[3958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff9cd3520 a2=0 a3=1 items=0 ppid=3812 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:15.901000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:17.063411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856017983.mount: Deactivated successfully. 
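
The NETFILTER_CFG records above come from kube-proxy programming its KUBE-* chains; each PROCTITLE field is the invoking command line, hex-encoded by the audit subsystem because the argv elements are separated by NUL bytes. A minimal decoding sketch (an annotation, not part of the captured journal, assuming Python 3 is available):

```python
# Decode an audit PROCTITLE value back into the command line it records.
# The audit subsystem hex-encodes the field because argv elements in the
# process's command-line memory are separated by NUL (0x00) bytes.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

# Value taken from the NETFILTER_CFG record at 17:26:15.616 above:
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030002D4E00"
    "4B5542452D50524F58592D43414E415259002D74006D616E676C65"
))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```

Note that the kernel truncates long proctitle values, so some of the hex strings above decode to command lines that are cut off mid-argument.
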
Dec 12 17:26:18.768738 kubelet[3664]: I1212 17:26:18.768197 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggwql" podStartSLOduration=4.768179982 podStartE2EDuration="4.768179982s" podCreationTimestamp="2025-12-12 17:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:15.955159368 +0000 UTC m=+8.128432005" watchObservedRunningTime="2025-12-12 17:26:18.768179982 +0000 UTC m=+10.941452619" Dec 12 17:26:23.359383 containerd[2108]: time="2025-12-12T17:26:23.358892048Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:23.363951 containerd[2108]: time="2025-12-12T17:26:23.363893792Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20774174" Dec 12 17:26:23.367875 containerd[2108]: time="2025-12-12T17:26:23.367803408Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:23.372699 containerd[2108]: time="2025-12-12T17:26:23.372629860Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:23.373266 containerd[2108]: time="2025-12-12T17:26:23.372975580Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 7.956244593s" Dec 12 17:26:23.373266 containerd[2108]: time="2025-12-12T17:26:23.373005892Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:26:23.376203 containerd[2108]: time="2025-12-12T17:26:23.376173435Z" level=info msg="CreateContainer within sandbox \"981a1ec7d8fa3f18366c0d4eae17c37b0e59e40552ffb9f6d6ff13867db4057d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:26:23.402212 containerd[2108]: time="2025-12-12T17:26:23.402170786Z" level=info msg="Container 35ed22a0a0f78229229179a8a53ebdbe19694f71fd148aadacbfc01a4ac4edb2: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:23.418990 containerd[2108]: time="2025-12-12T17:26:23.418898688Z" level=info msg="CreateContainer within sandbox \"981a1ec7d8fa3f18366c0d4eae17c37b0e59e40552ffb9f6d6ff13867db4057d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"35ed22a0a0f78229229179a8a53ebdbe19694f71fd148aadacbfc01a4ac4edb2\"" Dec 12 17:26:23.419506 containerd[2108]: time="2025-12-12T17:26:23.419481749Z" level=info msg="StartContainer for \"35ed22a0a0f78229229179a8a53ebdbe19694f71fd148aadacbfc01a4ac4edb2\"" Dec 12 17:26:23.420375 containerd[2108]: time="2025-12-12T17:26:23.420346177Z" level=info msg="connecting to shim 35ed22a0a0f78229229179a8a53ebdbe19694f71fd148aadacbfc01a4ac4edb2" address="unix:///run/containerd/s/fa4ed51c081bececcebb621fd4bf01b32188b043d5318705a57f3d93b429deed" protocol=ttrpc version=3 Dec 12 17:26:23.442025 systemd[1]: Started 
cri-containerd-35ed22a0a0f78229229179a8a53ebdbe19694f71fd148aadacbfc01a4ac4edb2.scope - libcontainer container 35ed22a0a0f78229229179a8a53ebdbe19694f71fd148aadacbfc01a4ac4edb2. Dec 12 17:26:23.450000 audit: BPF prog-id=170 op=LOAD Dec 12 17:26:23.454295 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 17:26:23.454341 kernel: audit: type=1334 audit(1765560383.450:541): prog-id=170 op=LOAD Dec 12 17:26:23.458000 audit: BPF prog-id=171 op=LOAD Dec 12 17:26:23.463367 kernel: audit: type=1334 audit(1765560383.458:542): prog-id=171 op=LOAD Dec 12 17:26:23.458000 audit[3969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.482702 kernel: audit: type=1300 audit(1765560383.458:542): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.503130 kernel: audit: type=1327 audit(1765560383.458:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.458000 audit: BPF prog-id=171 op=UNLOAD Dec 12 17:26:23.507885 kernel: audit: type=1334 audit(1765560383.458:543): prog-id=171 op=UNLOAD Dec 12 17:26:23.507957 kernel: audit: type=1300 audit(1765560383.458:543): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.458000 audit[3969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.542696 kernel: audit: type=1327 audit(1765560383.458:543): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.458000 audit: BPF prog-id=172 op=LOAD Dec 12 17:26:23.547891 kernel: audit: type=1334 audit(1765560383.458:544): prog-id=172 op=LOAD Dec 12 17:26:23.458000 audit[3969]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.565692 kernel: audit: type=1300 audit(1765560383.458:544): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.462000 audit: BPF prog-id=173 op=LOAD Dec 12 17:26:23.462000 audit[3969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.462000 audit: BPF prog-id=173 op=UNLOAD Dec 12 17:26:23.462000 audit[3969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.462000 audit: BPF prog-id=172 op=UNLOAD Dec 12 17:26:23.462000 audit[3969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.462000 audit: BPF prog-id=174 op=LOAD Dec 12 17:26:23.462000 audit[3969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3756 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:23.462000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.586298 kernel: audit: type=1327 audit(1765560383.458:544): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656432326130613066373832323932323931373961386135336562 Dec 12 17:26:23.598161 containerd[2108]: time="2025-12-12T17:26:23.598132054Z" level=info msg="StartContainer for \"35ed22a0a0f78229229179a8a53ebdbe19694f71fd148aadacbfc01a4ac4edb2\" returns successfully" Dec 12 17:26:23.972647 kubelet[3664]: I1212 17:26:23.972514 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-v4jgn" podStartSLOduration=2.013012894 podStartE2EDuration="9.972292066s" podCreationTimestamp="2025-12-12 17:26:14 +0000 UTC" firstStartedPulling="2025-12-12 17:26:15.41433827 +0000 UTC m=+7.587610907" lastFinishedPulling="2025-12-12 17:26:23.373617442 +0000 UTC m=+15.546890079" observedRunningTime="2025-12-12 17:26:23.972164424 +0000 UTC m=+16.145437053" watchObservedRunningTime="2025-12-12 17:26:23.972292066 +0000 UTC m=+16.145564703" Dec 12 17:26:28.650328 sudo[2585]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:28.670387 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 12 17:26:28.670458 kernel: audit: type=1106 audit(1765560388.649:549): pid=2585 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:28.649000 audit[2585]: USER_END pid=2585 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:28.649000 audit[2585]: CRED_DISP pid=2585 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:28.686136 kernel: audit: type=1104 audit(1765560388.649:550): pid=2585 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:28.725327 sshd[2584]: Connection closed by 10.200.16.10 port 40022 Dec 12 17:26:28.723985 sshd-session[2581]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:28.724000 audit[2581]: USER_END pid=2581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:26:28.730055 systemd-logind[2072]: Session 9 logged out. Waiting for processes to exit. 
Dec 12 17:26:28.724000 audit[2581]: CRED_DISP pid=2581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:26:28.750785 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:40022.service: Deactivated successfully. Dec 12 17:26:28.756182 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:26:28.757601 systemd[1]: session-9.scope: Consumed 3.003s CPU time, 214.9M memory peak. Dec 12 17:26:28.762841 systemd-logind[2072]: Removed session 9. Dec 12 17:26:28.768282 kernel: audit: type=1106 audit(1765560388.724:551): pid=2581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:26:28.768354 kernel: audit: type=1104 audit(1765560388.724:552): pid=2581 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:26:28.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.11:22-10.200.16.10:40022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:28.786006 kernel: audit: type=1131 audit(1765560388.750:553): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.11:22-10.200.16.10:40022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:30.210000 audit[4049]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:30.210000 audit[4049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd20aabe0 a2=0 a3=1 items=0 ppid=3812 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.247497 kernel: audit: type=1325 audit(1765560390.210:554): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:30.247606 kernel: audit: type=1300 audit(1765560390.210:554): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd20aabe0 a2=0 a3=1 items=0 ppid=3812 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:30.258962 kernel: audit: type=1327 audit(1765560390.210:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:30.253000 audit[4049]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:30.253000 audit[4049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd20aabe0 a2=0 a3=1 items=0 ppid=3812 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.295318 kernel: audit: type=1325 audit(1765560390.253:555): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:30.295394 kernel: audit: type=1300 audit(1765560390.253:555): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd20aabe0 a2=0 a3=1 items=0 ppid=3812 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:30.295000 audit[4051]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:30.295000 audit[4051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff7841270 a2=0 a3=1 items=0 ppid=3812 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:30.305000 audit[4051]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4051 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 17:26:30.305000 audit[4051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff7841270 a2=0 a3=1 items=0 ppid=3812 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:30.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:33.461000 audit[4054]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:33.461000 audit[4054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff688faa0 a2=0 a3=1 items=0 ppid=3812 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:33.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:33.465000 audit[4054]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:33.465000 audit[4054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff688faa0 a2=0 a3=1 items=0 ppid=3812 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:33.465000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:33.474000 audit[4056]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:33.474000 audit[4056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc37befe0 a2=0 a3=1 items=0 ppid=3812 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:33.474000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:33.481000 audit[4056]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:33.481000 audit[4056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc37befe0 a2=0 a3=1 items=0 ppid=3812 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:33.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:34.495000 audit[4062]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:34.500862 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 12 17:26:34.500934 kernel: audit: type=1325 
audit(1765560394.495:562): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:34.495000 audit[4062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffe2ea230 a2=0 a3=1 items=0 ppid=3812 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:34.530097 kernel: audit: type=1300 audit(1765560394.495:562): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffe2ea230 a2=0 a3=1 items=0 ppid=3812 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:34.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:34.541199 kernel: audit: type=1327 audit(1765560394.495:562): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:34.530000 audit[4062]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:34.552157 kernel: audit: type=1325 audit(1765560394.530:563): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:34.530000 audit[4062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe2ea230 a2=0 a3=1 items=0 ppid=3812 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:34.573249 kernel: audit: type=1300 audit(1765560394.530:563): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe2ea230 a2=0 a3=1 items=0 ppid=3812 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:34.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:34.585000 kernel: audit: type=1327 audit(1765560394.530:563): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:35.136620 systemd[1]: Created slice kubepods-besteffort-pod290e25e1_7b1f_4b1b_8bf8_050d6b3e58a3.slice - libcontainer container kubepods-besteffort-pod290e25e1_7b1f_4b1b_8bf8_050d6b3e58a3.slice. 
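[editor's note] The audit PROCTITLE fields in the records above (e.g. audit id :562) carry the command line of the process hex-encoded, with NUL bytes between argv elements. The following is a minimal standalone sketch (not part of the logged system, file name illustrative) showing how that hex string decodes to the underlying iptables-restore invocation:

```go
// decode_proctitle.go - standalone sketch; it only decodes the hex-encoded
// PROCTITLE field shown in the audit records above, nothing more.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Hex string copied verbatim from the PROCTITLE records above (audit id :562/:563).
	const raw = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	decoded, err := hex.DecodeString(raw)
	if err != nil {
		panic(err)
	}

	// The audit subsystem joins argv with NUL bytes; replace them with spaces for display.
	fmt.Println(strings.ReplaceAll(string(decoded), "\x00", " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}
```

Decoded, the repeated NETFILTER_CFG/SYSCALL/PROCTITLE triplets correspond to kube-proxy-style `iptables-restore -w 5 -W 100000 --noflush --counters` runs registering filter and nat rules via nftables (`/usr/sbin/xtables-nft-multi`).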
Dec 12 17:26:35.189990 kubelet[3664]: I1212 17:26:35.189930 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbchg\" (UniqueName: \"kubernetes.io/projected/290e25e1-7b1f-4b1b-8bf8-050d6b3e58a3-kube-api-access-mbchg\") pod \"calico-typha-5bccb8599c-c45qt\" (UID: \"290e25e1-7b1f-4b1b-8bf8-050d6b3e58a3\") " pod="calico-system/calico-typha-5bccb8599c-c45qt" Dec 12 17:26:35.189990 kubelet[3664]: I1212 17:26:35.189990 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/290e25e1-7b1f-4b1b-8bf8-050d6b3e58a3-tigera-ca-bundle\") pod \"calico-typha-5bccb8599c-c45qt\" (UID: \"290e25e1-7b1f-4b1b-8bf8-050d6b3e58a3\") " pod="calico-system/calico-typha-5bccb8599c-c45qt" Dec 12 17:26:35.190440 kubelet[3664]: I1212 17:26:35.190008 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/290e25e1-7b1f-4b1b-8bf8-050d6b3e58a3-typha-certs\") pod \"calico-typha-5bccb8599c-c45qt\" (UID: \"290e25e1-7b1f-4b1b-8bf8-050d6b3e58a3\") " pod="calico-system/calico-typha-5bccb8599c-c45qt" Dec 12 17:26:35.322588 systemd[1]: Created slice kubepods-besteffort-pod4e13e832_7cad_40cc_b328_03ea9680f15f.slice - libcontainer container kubepods-besteffort-pod4e13e832_7cad_40cc_b328_03ea9680f15f.slice. Dec 12 17:26:35.392283 kubelet[3664]: I1212 17:26:35.391900 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-var-lib-calico\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392283 kubelet[3664]: I1212 17:26:35.392218 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-cni-net-dir\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392658 kubelet[3664]: I1212 17:26:35.392251 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-xtables-lock\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392658 kubelet[3664]: I1212 17:26:35.392552 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4e13e832-7cad-40cc-b328-03ea9680f15f-node-certs\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392658 kubelet[3664]: I1212 17:26:35.392569 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-var-run-calico\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392658 kubelet[3664]: I1212 17:26:35.392597 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-lib-modules\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392658 kubelet[3664]: I1212 17:26:35.392607 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-policysync\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392780 kubelet[3664]: I1212 17:26:35.392626 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-cni-log-dir\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392944 kubelet[3664]: I1212 17:26:35.392869 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-cni-bin-dir\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392944 kubelet[3664]: I1212 17:26:35.392911 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958dj\" (UniqueName: \"kubernetes.io/projected/4e13e832-7cad-40cc-b328-03ea9680f15f-kube-api-access-958dj\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.392944 kubelet[3664]: I1212 17:26:35.392926 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e13e832-7cad-40cc-b328-03ea9680f15f-tigera-ca-bundle\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.393148 kubelet[3664]: I1212 17:26:35.393082 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4e13e832-7cad-40cc-b328-03ea9680f15f-flexvol-driver-host\") pod \"calico-node-wc48d\" (UID: \"4e13e832-7cad-40cc-b328-03ea9680f15f\") " pod="calico-system/calico-node-wc48d" Dec 12 17:26:35.445526 containerd[2108]: time="2025-12-12T17:26:35.445424520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bccb8599c-c45qt,Uid:290e25e1-7b1f-4b1b-8bf8-050d6b3e58a3,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:35.497522 kubelet[3664]: E1212 17:26:35.497449 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.497522 kubelet[3664]: W1212 17:26:35.497470 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.497522 kubelet[3664]: E1212 17:26:35.497507 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.498040 kubelet[3664]: E1212 17:26:35.497681 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.498040 kubelet[3664]: W1212 17:26:35.497697 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.498040 kubelet[3664]: E1212 17:26:35.497708 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.498040 kubelet[3664]: E1212 17:26:35.497850 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.498040 kubelet[3664]: W1212 17:26:35.497957 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.498040 kubelet[3664]: E1212 17:26:35.497972 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.498633 kubelet[3664]: E1212 17:26:35.498245 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.498633 kubelet[3664]: W1212 17:26:35.498360 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.498633 kubelet[3664]: E1212 17:26:35.498372 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.499347 kubelet[3664]: E1212 17:26:35.499005 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.499347 kubelet[3664]: W1212 17:26:35.499125 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.499347 kubelet[3664]: E1212 17:26:35.499275 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.499871 kubelet[3664]: E1212 17:26:35.499796 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.499871 kubelet[3664]: W1212 17:26:35.499809 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.500720 kubelet[3664]: E1212 17:26:35.500684 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.504979 kubelet[3664]: E1212 17:26:35.504959 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.504979 kubelet[3664]: W1212 17:26:35.504974 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.504979 kubelet[3664]: E1212 17:26:35.505019 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.506090 kubelet[3664]: E1212 17:26:35.505837 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.506769 kubelet[3664]: W1212 17:26:35.505851 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.507397 kubelet[3664]: E1212 17:26:35.506866 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.507397 kubelet[3664]: W1212 17:26:35.506877 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.507397 kubelet[3664]: E1212 17:26:35.506973 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.507397 kubelet[3664]: W1212 17:26:35.506978 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.507397 kubelet[3664]: E1212 17:26:35.507036 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.507397 kubelet[3664]: E1212 17:26:35.507054 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.507397 kubelet[3664]: W1212 17:26:35.507059 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.507397 kubelet[3664]: E1212 17:26:35.507061 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.507397 kubelet[3664]: E1212 17:26:35.507067 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.507397 kubelet[3664]: E1212 17:26:35.507077 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.508986 containerd[2108]: time="2025-12-12T17:26:35.507270062Z" level=info msg="connecting to shim 0c50e3f19ef6e8f82521e0d13b5098c89b463ef5c719daa9a0db751dae8ec438" address="unix:///run/containerd/s/5de15b190ff78ac22d3dd43bc2bbdd1ea13e5f37b347310e5b64db8b4ee09cd9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:35.509404 kubelet[3664]: E1212 17:26:35.509305 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.510693 kubelet[3664]: W1212 17:26:35.510575 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.510781 kubelet[3664]: E1212 17:26:35.510695 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.511950 kubelet[3664]: E1212 17:26:35.511677 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.511950 kubelet[3664]: W1212 17:26:35.511693 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.511950 kubelet[3664]: E1212 17:26:35.511705 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.513379 kubelet[3664]: E1212 17:26:35.512383 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.513379 kubelet[3664]: W1212 17:26:35.512400 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.513379 kubelet[3664]: E1212 17:26:35.512419 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.514141 kubelet[3664]: E1212 17:26:35.514036 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.514141 kubelet[3664]: W1212 17:26:35.514057 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.514141 kubelet[3664]: E1212 17:26:35.514071 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.516104 kubelet[3664]: E1212 17:26:35.516085 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.516622 kubelet[3664]: W1212 17:26:35.516404 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.518199 kubelet[3664]: E1212 17:26:35.518134 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.519024 kubelet[3664]: W1212 17:26:35.518783 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.519024 kubelet[3664]: E1212 17:26:35.518816 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.519718 kubelet[3664]: E1212 17:26:35.519554 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.522029 kubelet[3664]: E1212 17:26:35.521784 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.522029 kubelet[3664]: W1212 17:26:35.521802 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.522029 kubelet[3664]: E1212 17:26:35.521820 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.524292 kubelet[3664]: E1212 17:26:35.524003 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.525097 kubelet[3664]: W1212 17:26:35.524952 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.525097 kubelet[3664]: E1212 17:26:35.524977 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.528931 kubelet[3664]: E1212 17:26:35.528813 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.528931 kubelet[3664]: W1212 17:26:35.528829 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.528931 kubelet[3664]: E1212 17:26:35.528841 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.535314 kubelet[3664]: E1212 17:26:35.534052 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:26:35.555121 systemd[1]: Started cri-containerd-0c50e3f19ef6e8f82521e0d13b5098c89b463ef5c719daa9a0db751dae8ec438.scope - libcontainer container 0c50e3f19ef6e8f82521e0d13b5098c89b463ef5c719daa9a0db751dae8ec438. Dec 12 17:26:35.559975 kubelet[3664]: E1212 17:26:35.559946 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.559975 kubelet[3664]: W1212 17:26:35.559965 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.560069 kubelet[3664]: E1212 17:26:35.559984 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.560154 kubelet[3664]: E1212 17:26:35.560137 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.560201 kubelet[3664]: W1212 17:26:35.560148 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.560201 kubelet[3664]: E1212 17:26:35.560193 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.560350 kubelet[3664]: E1212 17:26:35.560336 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.560350 kubelet[3664]: W1212 17:26:35.560347 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.560411 kubelet[3664]: E1212 17:26:35.560357 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.560518 kubelet[3664]: E1212 17:26:35.560497 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.560518 kubelet[3664]: W1212 17:26:35.560509 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.560518 kubelet[3664]: E1212 17:26:35.560517 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.560672 kubelet[3664]: E1212 17:26:35.560653 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.560672 kubelet[3664]: W1212 17:26:35.560663 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.560672 kubelet[3664]: E1212 17:26:35.560671 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.560805 kubelet[3664]: E1212 17:26:35.560778 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.560805 kubelet[3664]: W1212 17:26:35.560798 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.560805 kubelet[3664]: E1212 17:26:35.560806 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.560959 kubelet[3664]: E1212 17:26:35.560934 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.560959 kubelet[3664]: W1212 17:26:35.560956 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.561026 kubelet[3664]: E1212 17:26:35.560965 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.561115 kubelet[3664]: E1212 17:26:35.561098 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.561115 kubelet[3664]: W1212 17:26:35.561110 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.561165 kubelet[3664]: E1212 17:26:35.561118 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.561297 kubelet[3664]: E1212 17:26:35.561283 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.561297 kubelet[3664]: W1212 17:26:35.561293 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.561360 kubelet[3664]: E1212 17:26:35.561301 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.561434 kubelet[3664]: E1212 17:26:35.561419 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.561434 kubelet[3664]: W1212 17:26:35.561430 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.561502 kubelet[3664]: E1212 17:26:35.561439 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.561547 kubelet[3664]: E1212 17:26:35.561528 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.561547 kubelet[3664]: W1212 17:26:35.561536 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.561589 kubelet[3664]: E1212 17:26:35.561557 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.561662 kubelet[3664]: E1212 17:26:35.561649 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.561662 kubelet[3664]: W1212 17:26:35.561658 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.561662 kubelet[3664]: E1212 17:26:35.561664 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.561799 kubelet[3664]: E1212 17:26:35.561784 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.561799 kubelet[3664]: W1212 17:26:35.561793 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.561799 kubelet[3664]: E1212 17:26:35.561799 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.562026 kubelet[3664]: E1212 17:26:35.562011 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.562026 kubelet[3664]: W1212 17:26:35.562020 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.562026 kubelet[3664]: E1212 17:26:35.562028 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.562144 kubelet[3664]: E1212 17:26:35.562130 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.562144 kubelet[3664]: W1212 17:26:35.562137 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.562144 kubelet[3664]: E1212 17:26:35.562143 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.562280 kubelet[3664]: E1212 17:26:35.562267 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.562280 kubelet[3664]: W1212 17:26:35.562276 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.562325 kubelet[3664]: E1212 17:26:35.562283 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.562427 kubelet[3664]: E1212 17:26:35.562409 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.562427 kubelet[3664]: W1212 17:26:35.562418 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.562427 kubelet[3664]: E1212 17:26:35.562425 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.562557 kubelet[3664]: E1212 17:26:35.562534 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.562557 kubelet[3664]: W1212 17:26:35.562544 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.562606 kubelet[3664]: E1212 17:26:35.562562 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.562722 kubelet[3664]: E1212 17:26:35.562705 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.562722 kubelet[3664]: W1212 17:26:35.562716 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.562786 kubelet[3664]: E1212 17:26:35.562724 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.563475 kubelet[3664]: E1212 17:26:35.563450 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.563475 kubelet[3664]: W1212 17:26:35.563471 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.563659 kubelet[3664]: E1212 17:26:35.563482 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.567000 audit[4156]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:35.567000 audit[4156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdae36620 a2=0 a3=1 items=0 ppid=3812 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.598887 kernel: audit: type=1325 audit(1765560395.567:564): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:35.598967 kernel: audit: type=1300 audit(1765560395.567:564): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdae36620 a2=0 a3=1 items=0 ppid=3812 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.602788 kubelet[3664]: E1212 17:26:35.602586 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.602788 kubelet[3664]: W1212 17:26:35.602627 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.602788 kubelet[3664]: E1212 17:26:35.602647 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.602788 kubelet[3664]: I1212 17:26:35.602672 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b-socket-dir\") pod \"csi-node-driver-trqfx\" (UID: \"b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b\") " pod="calico-system/csi-node-driver-trqfx" Dec 12 17:26:35.602985 kubelet[3664]: E1212 17:26:35.602948 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.602985 kubelet[3664]: W1212 17:26:35.602960 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.602985 kubelet[3664]: E1212 17:26:35.602974 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.603033 kubelet[3664]: I1212 17:26:35.602987 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tt2x\" (UniqueName: \"kubernetes.io/projected/b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b-kube-api-access-2tt2x\") pod \"csi-node-driver-trqfx\" (UID: \"b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b\") " pod="calico-system/csi-node-driver-trqfx" Dec 12 17:26:35.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:35.603270 kubelet[3664]: E1212 17:26:35.603123 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.603270 kubelet[3664]: W1212 17:26:35.603136 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.603270 kubelet[3664]: E1212 17:26:35.603144 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.603270 kubelet[3664]: I1212 17:26:35.603156 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b-registration-dir\") pod \"csi-node-driver-trqfx\" (UID: \"b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b\") " pod="calico-system/csi-node-driver-trqfx" Dec 12 17:26:35.611934 kernel: audit: type=1327 audit(1765560395.567:564): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:35.612522 kubelet[3664]: E1212 17:26:35.612379 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.612522 kubelet[3664]: W1212 17:26:35.612394 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.612522 kubelet[3664]: E1212 17:26:35.612423 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.613198 kubelet[3664]: E1212 17:26:35.613062 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.613198 kubelet[3664]: W1212 17:26:35.613126 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.613198 kubelet[3664]: E1212 17:26:35.613161 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.614032 kubelet[3664]: E1212 17:26:35.613934 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.614032 kubelet[3664]: W1212 17:26:35.613966 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.614032 kubelet[3664]: E1212 17:26:35.613998 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.614356 kubelet[3664]: E1212 17:26:35.614302 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.614356 kubelet[3664]: W1212 17:26:35.614314 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.616007 kubelet[3664]: E1212 17:26:35.615986 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.616007 kubelet[3664]: I1212 17:26:35.616013 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b-kubelet-dir\") pod \"csi-node-driver-trqfx\" (UID: \"b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b\") " pod="calico-system/csi-node-driver-trqfx" Dec 12 17:26:35.616356 kubelet[3664]: E1212 17:26:35.616118 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.616356 kubelet[3664]: W1212 17:26:35.616130 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.616356 kubelet[3664]: E1212 17:26:35.616156 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.616827 kubelet[3664]: E1212 17:26:35.616640 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.616827 kubelet[3664]: W1212 17:26:35.616679 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.616827 kubelet[3664]: E1212 17:26:35.616692 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.617150 kubelet[3664]: E1212 17:26:35.617077 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.617150 kubelet[3664]: W1212 17:26:35.617090 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.617150 kubelet[3664]: E1212 17:26:35.617109 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.617150 kubelet[3664]: I1212 17:26:35.617126 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b-varrun\") pod \"csi-node-driver-trqfx\" (UID: \"b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b\") " pod="calico-system/csi-node-driver-trqfx" Dec 12 17:26:35.617279 kubelet[3664]: E1212 17:26:35.617258 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.617309 kubelet[3664]: W1212 17:26:35.617280 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.617309 kubelet[3664]: E1212 17:26:35.617295 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.617438 kubelet[3664]: E1212 17:26:35.617424 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.617438 kubelet[3664]: W1212 17:26:35.617433 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.617494 kubelet[3664]: E1212 17:26:35.617442 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.617651 kubelet[3664]: E1212 17:26:35.617623 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.617651 kubelet[3664]: W1212 17:26:35.617645 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.617651 kubelet[3664]: E1212 17:26:35.617657 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.617769 kubelet[3664]: E1212 17:26:35.617758 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.617769 kubelet[3664]: W1212 17:26:35.617763 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.617769 kubelet[3664]: E1212 17:26:35.617769 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.618196 kubelet[3664]: E1212 17:26:35.617917 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.618196 kubelet[3664]: W1212 17:26:35.617926 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.618196 kubelet[3664]: E1212 17:26:35.617934 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.598000 audit[4156]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:35.626753 kernel: audit: type=1325 audit(1765560395.598:565): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:35.632932 containerd[2108]: time="2025-12-12T17:26:35.632887643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wc48d,Uid:4e13e832-7cad-40cc-b328-03ea9680f15f,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:35.598000 audit[4156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdae36620 a2=0 a3=1 items=0 ppid=3812 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:35.599000 audit: BPF prog-id=175 op=LOAD Dec 12 17:26:35.601000 audit: BPF prog-id=176 op=LOAD Dec 12 17:26:35.601000 audit[4107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4078 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063353065336631396566366538663832353231653064313362353039 Dec 12 17:26:35.602000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:26:35.602000 audit[4107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063353065336631396566366538663832353231653064313362353039 Dec 12 17:26:35.602000 audit: BPF prog-id=177 op=LOAD Dec 12 17:26:35.602000 audit[4107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4078 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063353065336631396566366538663832353231653064313362353039 Dec 12 17:26:35.611000 audit: BPF prog-id=178 op=LOAD Dec 12 17:26:35.611000 audit[4107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4078 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063353065336631396566366538663832353231653064313362353039 Dec 12 17:26:35.611000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:26:35.611000 audit[4107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063353065336631396566366538663832353231653064313362353039 Dec 12 17:26:35.611000 audit: BPF prog-id=177 op=UNLOAD Dec 12 17:26:35.611000 audit[4107]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063353065336631396566366538663832353231653064313362353039 Dec 12 17:26:35.611000 audit: BPF prog-id=179 op=LOAD Dec 12 17:26:35.611000 audit[4107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4078 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.611000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063353065336631396566366538663832353231653064313362353039 Dec 12 17:26:35.667918 containerd[2108]: time="2025-12-12T17:26:35.667263103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bccb8599c-c45qt,Uid:290e25e1-7b1f-4b1b-8bf8-050d6b3e58a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c50e3f19ef6e8f82521e0d13b5098c89b463ef5c719daa9a0db751dae8ec438\"" Dec 12 17:26:35.671170 containerd[2108]: time="2025-12-12T17:26:35.671134274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:26:35.696372 containerd[2108]: time="2025-12-12T17:26:35.696299650Z" level=info msg="connecting to shim 1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994" address="unix:///run/containerd/s/d4aeee1ecd316b1657393bd07df09e8aeb587297d0d2851882bfb960a0268635" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:35.721290 kubelet[3664]: E1212 17:26:35.721249 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.721290 kubelet[3664]: W1212 17:26:35.721277 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.721674 kubelet[3664]: E1212 17:26:35.721302 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.721674 kubelet[3664]: E1212 17:26:35.721514 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.721674 kubelet[3664]: W1212 17:26:35.721523 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.721674 kubelet[3664]: E1212 17:26:35.721538 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.722050 kubelet[3664]: E1212 17:26:35.722035 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.722839 kubelet[3664]: W1212 17:26:35.722786 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.722839 kubelet[3664]: E1212 17:26:35.722818 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.724007 kubelet[3664]: E1212 17:26:35.722982 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.724007 kubelet[3664]: W1212 17:26:35.722998 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.724007 kubelet[3664]: E1212 17:26:35.723013 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.725015 kubelet[3664]: E1212 17:26:35.724832 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.726050 kubelet[3664]: W1212 17:26:35.725190 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.726050 kubelet[3664]: E1212 17:26:35.725215 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.726383 kubelet[3664]: E1212 17:26:35.726195 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.726487 kubelet[3664]: W1212 17:26:35.726472 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.726617 kubelet[3664]: E1212 17:26:35.726574 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.727076 kubelet[3664]: E1212 17:26:35.727060 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.727184 kubelet[3664]: W1212 17:26:35.727172 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.727927 kubelet[3664]: E1212 17:26:35.727245 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.728362 kubelet[3664]: E1212 17:26:35.728298 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.728362 kubelet[3664]: W1212 17:26:35.728312 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.728362 kubelet[3664]: E1212 17:26:35.728339 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.728978 kubelet[3664]: E1212 17:26:35.728962 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.729975 kubelet[3664]: W1212 17:26:35.729908 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.729975 kubelet[3664]: E1212 17:26:35.729952 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.730424 kubelet[3664]: E1212 17:26:35.730358 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.730424 kubelet[3664]: W1212 17:26:35.730376 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.730605 kubelet[3664]: E1212 17:26:35.730500 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.730826 systemd[1]: Started cri-containerd-1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994.scope - libcontainer container 1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994. Dec 12 17:26:35.731775 kubelet[3664]: E1212 17:26:35.731614 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.731775 kubelet[3664]: W1212 17:26:35.731625 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.732968 kubelet[3664]: E1212 17:26:35.731896 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.733289 kubelet[3664]: E1212 17:26:35.733274 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.733437 kubelet[3664]: W1212 17:26:35.733311 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.733437 kubelet[3664]: E1212 17:26:35.733340 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.733888 kubelet[3664]: E1212 17:26:35.733872 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.734024 kubelet[3664]: W1212 17:26:35.733951 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.734024 kubelet[3664]: E1212 17:26:35.733996 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.734370 kubelet[3664]: E1212 17:26:35.734246 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.734370 kubelet[3664]: W1212 17:26:35.734261 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.734370 kubelet[3664]: E1212 17:26:35.734283 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.734556 kubelet[3664]: E1212 17:26:35.734520 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.734556 kubelet[3664]: W1212 17:26:35.734533 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.734736 kubelet[3664]: E1212 17:26:35.734643 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.735937 kubelet[3664]: E1212 17:26:35.735911 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.735937 kubelet[3664]: W1212 17:26:35.735929 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.735937 kubelet[3664]: E1212 17:26:35.735969 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.736104 kubelet[3664]: E1212 17:26:35.736068 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.736104 kubelet[3664]: W1212 17:26:35.736076 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.736905 kubelet[3664]: E1212 17:26:35.736146 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.736905 kubelet[3664]: E1212 17:26:35.736187 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.736905 kubelet[3664]: W1212 17:26:35.736191 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.736905 kubelet[3664]: E1212 17:26:35.736268 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.736905 kubelet[3664]: E1212 17:26:35.736297 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.736905 kubelet[3664]: W1212 17:26:35.736301 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.736905 kubelet[3664]: E1212 17:26:35.736371 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.736905 kubelet[3664]: E1212 17:26:35.736518 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.736905 kubelet[3664]: W1212 17:26:35.736526 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.736905 kubelet[3664]: E1212 17:26:35.736534 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.737318 kubelet[3664]: E1212 17:26:35.736626 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.737318 kubelet[3664]: W1212 17:26:35.736630 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.737318 kubelet[3664]: E1212 17:26:35.736635 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.737318 kubelet[3664]: E1212 17:26:35.736765 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.737318 kubelet[3664]: W1212 17:26:35.736771 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.737318 kubelet[3664]: E1212 17:26:35.736781 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.737318 kubelet[3664]: E1212 17:26:35.736888 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.737318 kubelet[3664]: W1212 17:26:35.736894 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.737318 kubelet[3664]: E1212 17:26:35.736904 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.737318 kubelet[3664]: E1212 17:26:35.737020 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.737468 kubelet[3664]: W1212 17:26:35.737025 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.737468 kubelet[3664]: E1212 17:26:35.737034 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.739035 kubelet[3664]: E1212 17:26:35.738978 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.739035 kubelet[3664]: W1212 17:26:35.738994 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.739035 kubelet[3664]: E1212 17:26:35.739009 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:35.751958 kubelet[3664]: E1212 17:26:35.751936 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:35.751958 kubelet[3664]: W1212 17:26:35.751952 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:35.752055 kubelet[3664]: E1212 17:26:35.751969 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:35.770000 audit: BPF prog-id=180 op=LOAD Dec 12 17:26:35.770000 audit: BPF prog-id=181 op=LOAD Dec 12 17:26:35.770000 audit[4200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4189 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161636366616233343033323637376362313936376630393631636230 Dec 12 17:26:35.771000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:26:35.771000 audit[4200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161636366616233343033323637376362313936376630393631636230 Dec 12 17:26:35.771000 audit: BPF prog-id=182 op=LOAD Dec 12 17:26:35.771000 audit[4200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4189 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161636366616233343033323637376362313936376630393631636230 Dec 12 17:26:35.771000 audit: BPF prog-id=183 op=LOAD Dec 12 17:26:35.771000 audit[4200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4189 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161636366616233343033323637376362313936376630393631636230 Dec 12 17:26:35.771000 audit: BPF prog-id=183 op=UNLOAD Dec 12 17:26:35.771000 audit[4200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161636366616233343033323637376362313936376630393631636230 Dec 12 17:26:35.771000 audit: BPF prog-id=182 op=UNLOAD Dec 
12 17:26:35.771000 audit[4200]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161636366616233343033323637376362313936376630393631636230 Dec 12 17:26:35.772000 audit: BPF prog-id=184 op=LOAD Dec 12 17:26:35.772000 audit[4200]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4189 pid=4200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:35.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161636366616233343033323637376362313936376630393631636230 Dec 12 17:26:35.789641 containerd[2108]: time="2025-12-12T17:26:35.789608695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wc48d,Uid:4e13e832-7cad-40cc-b328-03ea9680f15f,Namespace:calico-system,Attempt:0,} returns sandbox id \"1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994\"" Dec 12 17:26:36.897616 kubelet[3664]: E1212 17:26:36.897162 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:26:36.990294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363059994.mount: Deactivated successfully. 
Dec 12 17:26:37.666602 containerd[2108]: time="2025-12-12T17:26:37.666512617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:37.670195 containerd[2108]: time="2025-12-12T17:26:37.670008066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31717602" Dec 12 17:26:37.673591 containerd[2108]: time="2025-12-12T17:26:37.673538028Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:37.678559 containerd[2108]: time="2025-12-12T17:26:37.678473943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:37.679303 containerd[2108]: time="2025-12-12T17:26:37.679262577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.007964596s" Dec 12 17:26:37.679397 containerd[2108]: time="2025-12-12T17:26:37.679381636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:26:37.680768 containerd[2108]: time="2025-12-12T17:26:37.680736971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:26:37.691004 containerd[2108]: time="2025-12-12T17:26:37.690629697Z" level=info msg="CreateContainer within sandbox \"0c50e3f19ef6e8f82521e0d13b5098c89b463ef5c719daa9a0db751dae8ec438\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:26:37.715929 containerd[2108]: time="2025-12-12T17:26:37.715596117Z" level=info msg="Container 6ddcf24e95e9c15a72a941113b7d3f2e1e04ab45ce7c64932f71a1ff7926a3af: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:37.719530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837527931.mount: Deactivated successfully. Dec 12 17:26:37.747465 containerd[2108]: time="2025-12-12T17:26:37.747423599Z" level=info msg="CreateContainer within sandbox \"0c50e3f19ef6e8f82521e0d13b5098c89b463ef5c719daa9a0db751dae8ec438\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6ddcf24e95e9c15a72a941113b7d3f2e1e04ab45ce7c64932f71a1ff7926a3af\"" Dec 12 17:26:37.748596 containerd[2108]: time="2025-12-12T17:26:37.748541929Z" level=info msg="StartContainer for \"6ddcf24e95e9c15a72a941113b7d3f2e1e04ab45ce7c64932f71a1ff7926a3af\"" Dec 12 17:26:37.749924 containerd[2108]: time="2025-12-12T17:26:37.749823151Z" level=info msg="connecting to shim 6ddcf24e95e9c15a72a941113b7d3f2e1e04ab45ce7c64932f71a1ff7926a3af" address="unix:///run/containerd/s/5de15b190ff78ac22d3dd43bc2bbdd1ea13e5f37b347310e5b64db8b4ee09cd9" protocol=ttrpc version=3 Dec 12 17:26:37.770006 systemd[1]: Started cri-containerd-6ddcf24e95e9c15a72a941113b7d3f2e1e04ab45ce7c64932f71a1ff7926a3af.scope - libcontainer container 6ddcf24e95e9c15a72a941113b7d3f2e1e04ab45ce7c64932f71a1ff7926a3af. 
Dec 12 17:26:37.778000 audit: BPF prog-id=185 op=LOAD Dec 12 17:26:37.778000 audit: BPF prog-id=186 op=LOAD Dec 12 17:26:37.778000 audit[4263]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4078 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664646366323465393565396331356137326139343131313362376433 Dec 12 17:26:37.778000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:26:37.778000 audit[4263]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664646366323465393565396331356137326139343131313362376433 Dec 12 17:26:37.778000 audit: BPF prog-id=187 op=LOAD Dec 12 17:26:37.778000 audit[4263]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4078 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664646366323465393565396331356137326139343131313362376433 Dec 12 17:26:37.779000 audit: BPF prog-id=188 op=LOAD Dec 12 17:26:37.779000 audit[4263]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4078 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664646366323465393565396331356137326139343131313362376433 Dec 12 17:26:37.779000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:26:37.779000 audit[4263]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664646366323465393565396331356137326139343131313362376433 Dec 12 17:26:37.779000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:26:37.779000 audit[4263]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4078 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664646366323465393565396331356137326139343131313362376433 Dec 12 17:26:37.779000 audit: BPF prog-id=189 op=LOAD Dec 12 17:26:37.779000 audit[4263]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4078 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664646366323465393565396331356137326139343131313362376433 Dec 12 17:26:37.807706 containerd[2108]: time="2025-12-12T17:26:37.807677055Z" level=info msg="StartContainer for \"6ddcf24e95e9c15a72a941113b7d3f2e1e04ab45ce7c64932f71a1ff7926a3af\" returns successfully" Dec 12 17:26:38.017832 kubelet[3664]: I1212 17:26:38.017098 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bccb8599c-c45qt" podStartSLOduration=1.006196548 podStartE2EDuration="3.017081776s" podCreationTimestamp="2025-12-12 17:26:35 +0000 UTC" firstStartedPulling="2025-12-12 17:26:35.669640418 +0000 UTC m=+27.842913055" lastFinishedPulling="2025-12-12 17:26:37.680525646 +0000 UTC m=+29.853798283" observedRunningTime="2025-12-12 17:26:38.017033591 +0000 UTC m=+30.190306220" watchObservedRunningTime="2025-12-12 17:26:38.017081776 +0000 UTC m=+30.190354413" Dec 12 17:26:38.079080 kubelet[3664]: E1212 17:26:38.078941 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.079080 kubelet[3664]: W1212 17:26:38.078971 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.079080 kubelet[3664]: E1212 17:26:38.078994 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.079397 kubelet[3664]: E1212 17:26:38.079291 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.079397 kubelet[3664]: W1212 17:26:38.079302 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.079397 kubelet[3664]: E1212 17:26:38.079317 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:38.079549 kubelet[3664]: E1212 17:26:38.079537 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.079601 kubelet[3664]: W1212 17:26:38.079591 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.079731 kubelet[3664]: E1212 17:26:38.079640 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.079823 kubelet[3664]: E1212 17:26:38.079813 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.079985 kubelet[3664]: W1212 17:26:38.079883 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.079985 kubelet[3664]: E1212 17:26:38.079897 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.080122 kubelet[3664]: E1212 17:26:38.080111 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.080174 kubelet[3664]: W1212 17:26:38.080164 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.080225 kubelet[3664]: E1212 17:26:38.080215 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.080493 kubelet[3664]: E1212 17:26:38.080408 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.080493 kubelet[3664]: W1212 17:26:38.080418 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.080493 kubelet[3664]: E1212 17:26:38.080429 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.080631 kubelet[3664]: E1212 17:26:38.080620 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.080685 kubelet[3664]: W1212 17:26:38.080675 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.080736 kubelet[3664]: E1212 17:26:38.080725 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:38.082180 kubelet[3664]: E1212 17:26:38.082063 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.082180 kubelet[3664]: W1212 17:26:38.082078 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.082180 kubelet[3664]: E1212 17:26:38.082091 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.082335 kubelet[3664]: E1212 17:26:38.082324 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.082390 kubelet[3664]: W1212 17:26:38.082380 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.082502 kubelet[3664]: E1212 17:26:38.082426 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.082594 kubelet[3664]: E1212 17:26:38.082585 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.082718 kubelet[3664]: W1212 17:26:38.082635 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.082718 kubelet[3664]: E1212 17:26:38.082648 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.082833 kubelet[3664]: E1212 17:26:38.082823 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.082894 kubelet[3664]: W1212 17:26:38.082884 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.083016 kubelet[3664]: E1212 17:26:38.082938 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.083110 kubelet[3664]: E1212 17:26:38.083099 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.083162 kubelet[3664]: W1212 17:26:38.083153 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.083215 kubelet[3664]: E1212 17:26:38.083204 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:38.084049 kubelet[3664]: E1212 17:26:38.083931 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.084049 kubelet[3664]: W1212 17:26:38.083945 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.084049 kubelet[3664]: E1212 17:26:38.083955 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.085144 kubelet[3664]: E1212 17:26:38.085090 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.085144 kubelet[3664]: W1212 17:26:38.085101 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.085144 kubelet[3664]: E1212 17:26:38.085111 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.085434 kubelet[3664]: E1212 17:26:38.085391 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.085434 kubelet[3664]: W1212 17:26:38.085402 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.085434 kubelet[3664]: E1212 17:26:38.085411 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.148826 kubelet[3664]: E1212 17:26:38.148734 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.148826 kubelet[3664]: W1212 17:26:38.148759 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.148826 kubelet[3664]: E1212 17:26:38.148778 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.149041 kubelet[3664]: E1212 17:26:38.148983 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.149041 kubelet[3664]: W1212 17:26:38.148992 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.149041 kubelet[3664]: E1212 17:26:38.149003 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:38.149207 kubelet[3664]: E1212 17:26:38.149185 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.149207 kubelet[3664]: W1212 17:26:38.149193 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.149252 kubelet[3664]: E1212 17:26:38.149211 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.149402 kubelet[3664]: E1212 17:26:38.149388 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.149402 kubelet[3664]: W1212 17:26:38.149401 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.149466 kubelet[3664]: E1212 17:26:38.149416 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.149555 kubelet[3664]: E1212 17:26:38.149539 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.149555 kubelet[3664]: W1212 17:26:38.149549 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.149618 kubelet[3664]: E1212 17:26:38.149564 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.149676 kubelet[3664]: E1212 17:26:38.149665 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.149676 kubelet[3664]: W1212 17:26:38.149673 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.149723 kubelet[3664]: E1212 17:26:38.149687 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.149869 kubelet[3664]: E1212 17:26:38.149844 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.149869 kubelet[3664]: W1212 17:26:38.149863 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.149931 kubelet[3664]: E1212 17:26:38.149878 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:38.150076 kubelet[3664]: E1212 17:26:38.150059 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.150076 kubelet[3664]: W1212 17:26:38.150072 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.150123 kubelet[3664]: E1212 17:26:38.150081 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.150312 kubelet[3664]: E1212 17:26:38.150296 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.150312 kubelet[3664]: W1212 17:26:38.150308 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.150360 kubelet[3664]: E1212 17:26:38.150317 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.150479 kubelet[3664]: E1212 17:26:38.150463 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.150479 kubelet[3664]: W1212 17:26:38.150474 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.150529 kubelet[3664]: E1212 17:26:38.150484 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.150646 kubelet[3664]: E1212 17:26:38.150633 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.150646 kubelet[3664]: W1212 17:26:38.150643 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.150706 kubelet[3664]: E1212 17:26:38.150657 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.150871 kubelet[3664]: E1212 17:26:38.150846 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.150907 kubelet[3664]: W1212 17:26:38.150872 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.150907 kubelet[3664]: E1212 17:26:38.150881 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:38.151211 kubelet[3664]: E1212 17:26:38.151194 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.151211 kubelet[3664]: W1212 17:26:38.151206 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.151264 kubelet[3664]: E1212 17:26:38.151219 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.151394 kubelet[3664]: E1212 17:26:38.151381 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.151394 kubelet[3664]: W1212 17:26:38.151390 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.151450 kubelet[3664]: E1212 17:26:38.151403 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.151531 kubelet[3664]: E1212 17:26:38.151519 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.151531 kubelet[3664]: W1212 17:26:38.151527 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.151584 kubelet[3664]: E1212 17:26:38.151537 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.151703 kubelet[3664]: E1212 17:26:38.151687 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.151703 kubelet[3664]: W1212 17:26:38.151697 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.151703 kubelet[3664]: E1212 17:26:38.151706 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.151980 kubelet[3664]: E1212 17:26:38.151965 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.151980 kubelet[3664]: W1212 17:26:38.151978 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.152049 kubelet[3664]: E1212 17:26:38.151996 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:38.152138 kubelet[3664]: E1212 17:26:38.152125 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:38.152138 kubelet[3664]: W1212 17:26:38.152135 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:38.152180 kubelet[3664]: E1212 17:26:38.152142 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:38.897314 kubelet[3664]: E1212 17:26:38.897223 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:26:38.991991 kubelet[3664]: I1212 17:26:38.991957 3664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:26:39.004902 containerd[2108]: time="2025-12-12T17:26:39.004720245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:39.008073 containerd[2108]: time="2025-12-12T17:26:39.008015931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:39.011617 containerd[2108]: time="2025-12-12T17:26:39.011569095Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:39.017561 containerd[2108]: time="2025-12-12T17:26:39.017511158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:39.018209 containerd[2108]: time="2025-12-12T17:26:39.017902007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.336799898s" Dec 12 17:26:39.018209 containerd[2108]: time="2025-12-12T17:26:39.017932495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:26:39.021221 containerd[2108]: time="2025-12-12T17:26:39.021190029Z" level=info msg="CreateContainer within sandbox \"1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:26:39.045601 containerd[2108]: time="2025-12-12T17:26:39.045555494Z" level=info msg="Container 15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:39.071597 containerd[2108]: time="2025-12-12T17:26:39.071469384Z" level=info msg="CreateContainer within 
sandbox \"1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c\"" Dec 12 17:26:39.072103 containerd[2108]: time="2025-12-12T17:26:39.072070613Z" level=info msg="StartContainer for \"15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c\"" Dec 12 17:26:39.074307 containerd[2108]: time="2025-12-12T17:26:39.074267668Z" level=info msg="connecting to shim 15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c" address="unix:///run/containerd/s/d4aeee1ecd316b1657393bd07df09e8aeb587297d0d2851882bfb960a0268635" protocol=ttrpc version=3 Dec 12 17:26:39.091796 kubelet[3664]: E1212 17:26:39.091643 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.091796 kubelet[3664]: W1212 17:26:39.091668 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.091796 kubelet[3664]: E1212 17:26:39.091689 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.092494 kubelet[3664]: E1212 17:26:39.092226 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.092494 kubelet[3664]: W1212 17:26:39.092240 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.092494 kubelet[3664]: E1212 17:26:39.092348 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.092956 kubelet[3664]: E1212 17:26:39.092880 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.092956 kubelet[3664]: W1212 17:26:39.092896 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.092956 kubelet[3664]: E1212 17:26:39.092908 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.093386 kubelet[3664]: E1212 17:26:39.093339 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.093569 kubelet[3664]: W1212 17:26:39.093451 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.093569 kubelet[3664]: E1212 17:26:39.093468 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.094034 kubelet[3664]: E1212 17:26:39.093749 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.094034 kubelet[3664]: W1212 17:26:39.093764 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.094034 kubelet[3664]: E1212 17:26:39.093776 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.094237 systemd[1]: Started cri-containerd-15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c.scope - libcontainer container 15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c. Dec 12 17:26:39.094731 kubelet[3664]: E1212 17:26:39.094715 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.094883 kubelet[3664]: W1212 17:26:39.094809 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.094883 kubelet[3664]: E1212 17:26:39.094826 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.095239 kubelet[3664]: E1212 17:26:39.095163 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.095239 kubelet[3664]: W1212 17:26:39.095176 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.095239 kubelet[3664]: E1212 17:26:39.095187 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.095564 kubelet[3664]: E1212 17:26:39.095489 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.095564 kubelet[3664]: W1212 17:26:39.095501 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.095564 kubelet[3664]: E1212 17:26:39.095512 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.095955 kubelet[3664]: E1212 17:26:39.095935 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.096133 kubelet[3664]: W1212 17:26:39.096028 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.096133 kubelet[3664]: E1212 17:26:39.096045 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.096468 kubelet[3664]: E1212 17:26:39.096375 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.096468 kubelet[3664]: W1212 17:26:39.096388 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.096468 kubelet[3664]: E1212 17:26:39.096399 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.096723 kubelet[3664]: E1212 17:26:39.096668 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.096723 kubelet[3664]: W1212 17:26:39.096679 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.096723 kubelet[3664]: E1212 17:26:39.096690 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.097354 kubelet[3664]: E1212 17:26:39.097244 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.097354 kubelet[3664]: W1212 17:26:39.097258 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.097354 kubelet[3664]: E1212 17:26:39.097269 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.097651 kubelet[3664]: E1212 17:26:39.097637 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.097822 kubelet[3664]: W1212 17:26:39.097721 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.097822 kubelet[3664]: E1212 17:26:39.097737 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.098241 kubelet[3664]: E1212 17:26:39.098136 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.098241 kubelet[3664]: W1212 17:26:39.098149 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.098241 kubelet[3664]: E1212 17:26:39.098159 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.098565 kubelet[3664]: E1212 17:26:39.098489 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.098565 kubelet[3664]: W1212 17:26:39.098500 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.098565 kubelet[3664]: E1212 17:26:39.098511 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.124000 audit: BPF prog-id=190 op=LOAD Dec 12 17:26:39.124000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4189 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135646233656331326531336636663237666462363134663930373631 Dec 12 17:26:39.124000 audit: BPF prog-id=191 op=LOAD Dec 12 17:26:39.124000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4189 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135646233656331326531336636663237666462363134663930373631 Dec 12 17:26:39.124000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:26:39.124000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135646233656331326531336636663237666462363134663930373631 Dec 12 17:26:39.124000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:26:39.124000 audit[4338]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135646233656331326531336636663237666462363134663930373631 Dec 12 17:26:39.124000 audit: BPF prog-id=192 op=LOAD Dec 12 17:26:39.124000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4189 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135646233656331326531336636663237666462363134663930373631 Dec 12 17:26:39.148368 containerd[2108]: time="2025-12-12T17:26:39.148121983Z" level=info msg="StartContainer for \"15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c\" returns successfully" Dec 12 17:26:39.153953 kubelet[3664]: E1212 17:26:39.153851 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.154083 kubelet[3664]: W1212 17:26:39.154067 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.154158 kubelet[3664]: E1212 17:26:39.154144 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.154690 kubelet[3664]: E1212 17:26:39.154656 3664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.155819 kubelet[3664]: W1212 17:26:39.155795 3664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.155992 kubelet[3664]: E1212 17:26:39.155938 3664 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.158051 systemd[1]: cri-containerd-15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c.scope: Deactivated successfully. Dec 12 17:26:39.160000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:26:39.162507 containerd[2108]: time="2025-12-12T17:26:39.162461570Z" level=info msg="received container exit event container_id:\"15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c\" id:\"15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c\" pid:4358 exited_at:{seconds:1765560399 nanos:161325777}" Dec 12 17:26:39.185250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15db3ec12e13f6f27fdb614f90761f654fcde7eec4df2073a836193ef3572e7c-rootfs.mount: Deactivated successfully. 
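The repeating kubelet errors above (driver-call.go / plugins.go) come from the FlexVolume prober: the kubelet periodically scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, executes each driver binary with the argument init, and parses the JSON it prints. Because the nodeagent~uds/uds binary has not been installed yet, the call produces no output and the parse fails with "unexpected end of JSON input"; the flexvol-driver container started above (from the pod2daemon-flexvol image) is what eventually puts that binary in place. As a minimal sketch of the handshake the kubelet expects, assuming the standard FlexVolume calling convention (the struct field names here are illustrative, not taken from this log):

```go
// Minimal FlexVolume "init" responder sketch. The kubelet runs the driver
// binary with "init" and expects a JSON status object on stdout; an empty
// reply is exactly what produces the unmarshal errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverCapabilities struct {
	Attach bool `json:"attach"`
}

type driverStatus struct {
	Status       string              `json:"status"`
	Message      string              `json:"message,omitempty"`
	Capabilities *driverCapabilities `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "init" must succeed and advertise whether attach/detach is needed.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: &driverCapabilities{Attach: false},
		})
		fmt.Println(string(out))
		return
	}
	// Calls this sketch does not implement are reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```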
Dec 12 17:26:40.898047 kubelet[3664]: E1212 17:26:40.897834 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:26:41.000263 containerd[2108]: time="2025-12-12T17:26:41.000146077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:26:42.898926 kubelet[3664]: E1212 17:26:42.897844 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:26:43.186975 containerd[2108]: time="2025-12-12T17:26:43.186414503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:43.190422 containerd[2108]: time="2025-12-12T17:26:43.190377724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65923959" Dec 12 17:26:43.193813 containerd[2108]: time="2025-12-12T17:26:43.193785045Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:43.198017 containerd[2108]: time="2025-12-12T17:26:43.197970902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:43.198579 containerd[2108]: time="2025-12-12T17:26:43.198268173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.197534651s" Dec 12 17:26:43.198579 containerd[2108]: time="2025-12-12T17:26:43.198297701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:26:43.201619 containerd[2108]: time="2025-12-12T17:26:43.201590052Z" level=info msg="CreateContainer within sandbox \"1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:26:43.227556 containerd[2108]: time="2025-12-12T17:26:43.227507526Z" level=info msg="Container 748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:43.245922 containerd[2108]: time="2025-12-12T17:26:43.245843086Z" level=info msg="CreateContainer within sandbox \"1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97\"" Dec 12 17:26:43.246551 containerd[2108]: time="2025-12-12T17:26:43.246510836Z" level=info msg="StartContainer for \"748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97\"" Dec 12 17:26:43.247889 
containerd[2108]: time="2025-12-12T17:26:43.247808040Z" level=info msg="connecting to shim 748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97" address="unix:///run/containerd/s/d4aeee1ecd316b1657393bd07df09e8aeb587297d0d2851882bfb960a0268635" protocol=ttrpc version=3 Dec 12 17:26:43.273310 systemd[1]: Started cri-containerd-748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97.scope - libcontainer container 748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97. Dec 12 17:26:43.311000 audit: BPF prog-id=193 op=LOAD Dec 12 17:26:43.320549 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 12 17:26:43.320648 kernel: audit: type=1334 audit(1765560403.311:596): prog-id=193 op=LOAD Dec 12 17:26:43.311000 audit[4416]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4189 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734386464326533393265343537616563323237366332356435366237 Dec 12 17:26:43.359885 kernel: audit: type=1300 audit(1765560403.311:596): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4189 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.359995 kernel: audit: type=1327 audit(1765560403.311:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734386464326533393265343537616563323237366332356435366237 Dec 12 17:26:43.311000 audit: BPF prog-id=194 op=LOAD Dec 12 17:26:43.367206 kernel: audit: type=1334 audit(1765560403.311:597): prog-id=194 op=LOAD Dec 12 17:26:43.311000 audit[4416]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4189 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.387644 kernel: audit: type=1300 audit(1765560403.311:597): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4189 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.406968 kernel: audit: type=1327 audit(1765560403.311:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734386464326533393265343537616563323237366332356435366237 Dec 12 17:26:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734386464326533393265343537616563323237366332356435366237 Dec 12 17:26:43.311000 
audit: BPF prog-id=194 op=UNLOAD Dec 12 17:26:43.414262 kernel: audit: type=1334 audit(1765560403.311:598): prog-id=194 op=UNLOAD Dec 12 17:26:43.311000 audit[4416]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.431286 kernel: audit: type=1300 audit(1765560403.311:598): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734386464326533393265343537616563323237366332356435366237 Dec 12 17:26:43.450567 kernel: audit: type=1327 audit(1765560403.311:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734386464326533393265343537616563323237366332356435366237 Dec 12 17:26:43.311000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:26:43.456238 kernel: audit: type=1334 audit(1765560403.311:599): prog-id=193 op=UNLOAD Dec 12 17:26:43.311000 audit[4416]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734386464326533393265343537616563323237366332356435366237 Dec 12 17:26:43.311000 audit: BPF prog-id=195 op=LOAD Dec 12 17:26:43.311000 audit[4416]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4189 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734386464326533393265343537616563323237366332356435366237 Dec 12 17:26:43.463749 containerd[2108]: time="2025-12-12T17:26:43.463717008Z" level=info msg="StartContainer for \"748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97\" returns successfully" Dec 12 17:26:44.610383 systemd[1]: cri-containerd-748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97.scope: Deactivated successfully. Dec 12 17:26:44.610694 systemd[1]: cri-containerd-748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97.scope: Consumed 331ms CPU time, 184.3M memory peak, 165.9M written to disk. 
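The audit records interleaved above (BPF prog-id LOAD/UNLOAD, SYSCALL arch=c00000b7 syscall=280) are emitted while runc sets up the new container, most likely for the cgroup device filter it attaches via bpf(2). The PROCTITLE field in each record is the invoking command line, hex-encoded with NUL bytes separating arguments; decoding it recovers the runc invocation containerd used (runc --root /run/containerd/runc/k8s.io --log …, ending in the truncated task ID). A small decoding sketch, with a shortened hypothetical value standing in for the long strings in the log:

```go
// Decode an audit PROCTITLE value: hex-encoded command line, NUL-separated args.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(hexTitle string) ([]string, error) {
	raw, err := hex.DecodeString(hexTitle)
	if err != nil {
		return nil, err
	}
	// Arguments are separated (and sometimes terminated) by NUL bytes.
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// "runc\0--root\0/run/containerd/runc/k8s.io", hex-encoded (shortened sample).
	sample := "72756e63002d2d726f6f74002f72756e2f636f6e7461696e6572642f72756e632f6b38732e696f"
	args, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}
```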
Dec 12 17:26:44.612752 containerd[2108]: time="2025-12-12T17:26:44.612707502Z" level=info msg="received container exit event container_id:\"748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97\" id:\"748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97\" pid:4429 exited_at:{seconds:1765560404 nanos:612380879}" Dec 12 17:26:44.613000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:26:44.633995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-748dd2e392e457aec2276c25d56b7e28f59e39ede0b96e3cf09524866fb07b97-rootfs.mount: Deactivated successfully. Dec 12 17:26:44.644621 kubelet[3664]: I1212 17:26:44.644369 3664 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:26:44.693265 systemd[1]: Created slice kubepods-besteffort-pod3862f9ed_932b_48f5_bc48_007596c724c2.slice - libcontainer container kubepods-besteffort-pod3862f9ed_932b_48f5_bc48_007596c724c2.slice. Dec 12 17:26:44.701507 systemd[1]: Created slice kubepods-burstable-podccdb2e15_30cf_4c79_a23a_01e0f8e2bcb6.slice - libcontainer container kubepods-burstable-podccdb2e15_30cf_4c79_a23a_01e0f8e2bcb6.slice. Dec 12 17:26:44.997291 kubelet[3664]: W1212 17:26:44.709393 3664 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4515.1.0-a-74f46d5ce1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4515.1.0-a-74f46d5ce1' and this object Dec 12 17:26:44.997291 kubelet[3664]: W1212 17:26:44.710105 3664 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4515.1.0-a-74f46d5ce1" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4515.1.0-a-74f46d5ce1' and this object Dec 12 17:26:44.997291 kubelet[3664]: E1212 17:26:44.710107 3664 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4515.1.0-a-74f46d5ce1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515.1.0-a-74f46d5ce1' and this object" logger="UnhandledError" Dec 12 17:26:44.997291 kubelet[3664]: E1212 17:26:44.710125 3664 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4515.1.0-a-74f46d5ce1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515.1.0-a-74f46d5ce1' and this object" logger="UnhandledError" Dec 12 17:26:44.997291 kubelet[3664]: W1212 17:26:44.710175 3664 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4515.1.0-a-74f46d5ce1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4515.1.0-a-74f46d5ce1' and this object Dec 12 17:26:44.713028 systemd[1]: Created slice kubepods-besteffort-podb5a783ab_afe4_428e_907f_e91738bea7d8.slice - libcontainer container kubepods-besteffort-podb5a783ab_afe4_428e_907f_e91738bea7d8.slice. 
Dec 12 17:26:44.997468 kubelet[3664]: E1212 17:26:44.710187 3664 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4515.1.0-a-74f46d5ce1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515.1.0-a-74f46d5ce1' and this object" logger="UnhandledError" Dec 12 17:26:44.997468 kubelet[3664]: I1212 17:26:44.797508 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qbb7\" (UniqueName: \"kubernetes.io/projected/b5a783ab-afe4-428e-907f-e91738bea7d8-kube-api-access-9qbb7\") pod \"whisker-799cd88dbb-bvhhr\" (UID: \"b5a783ab-afe4-428e-907f-e91738bea7d8\") " pod="calico-system/whisker-799cd88dbb-bvhhr" Dec 12 17:26:44.997468 kubelet[3664]: I1212 17:26:44.797538 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6-config-volume\") pod \"coredns-668d6bf9bc-rfcrw\" (UID: \"ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6\") " pod="kube-system/coredns-668d6bf9bc-rfcrw" Dec 12 17:26:44.997468 kubelet[3664]: I1212 17:26:44.798048 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444315f1-eff9-48ed-a7dc-c9b319819cb8-config\") pod \"goldmane-666569f655-8dz6b\" (UID: \"444315f1-eff9-48ed-a7dc-c9b319819cb8\") " pod="calico-system/goldmane-666569f655-8dz6b" Dec 12 17:26:44.997468 kubelet[3664]: I1212 17:26:44.798172 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxctt\" (UniqueName: \"kubernetes.io/projected/f432633b-5af0-45f1-805a-f300620eb030-kube-api-access-gxctt\") pod \"coredns-668d6bf9bc-7g82h\" (UID: \"f432633b-5af0-45f1-805a-f300620eb030\") " pod="kube-system/coredns-668d6bf9bc-7g82h" Dec 12 17:26:44.722109 systemd[1]: Created slice kubepods-besteffort-podea3fa3f4_48d1_49ff_b968_783d9802a6b3.slice - libcontainer container kubepods-besteffort-podea3fa3f4_48d1_49ff_b968_783d9802a6b3.slice. 
Dec 12 17:26:44.997579 kubelet[3664]: I1212 17:26:44.798192 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw8jn\" (UniqueName: \"kubernetes.io/projected/e108b102-1b18-4781-b323-af4f0e442eb0-kube-api-access-lw8jn\") pod \"calico-apiserver-5b7d8d9766-d9xlz\" (UID: \"e108b102-1b18-4781-b323-af4f0e442eb0\") " pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" Dec 12 17:26:44.997579 kubelet[3664]: I1212 17:26:44.798206 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/444315f1-eff9-48ed-a7dc-c9b319819cb8-goldmane-key-pair\") pod \"goldmane-666569f655-8dz6b\" (UID: \"444315f1-eff9-48ed-a7dc-c9b319819cb8\") " pod="calico-system/goldmane-666569f655-8dz6b" Dec 12 17:26:44.997579 kubelet[3664]: I1212 17:26:44.798218 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f432633b-5af0-45f1-805a-f300620eb030-config-volume\") pod \"coredns-668d6bf9bc-7g82h\" (UID: \"f432633b-5af0-45f1-805a-f300620eb030\") " pod="kube-system/coredns-668d6bf9bc-7g82h" Dec 12 17:26:44.997579 kubelet[3664]: I1212 17:26:44.798230 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/444315f1-eff9-48ed-a7dc-c9b319819cb8-goldmane-ca-bundle\") pod \"goldmane-666569f655-8dz6b\" (UID: \"444315f1-eff9-48ed-a7dc-c9b319819cb8\") " pod="calico-system/goldmane-666569f655-8dz6b" Dec 12 17:26:44.997579 kubelet[3664]: I1212 17:26:44.798244 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea3fa3f4-48d1-49ff-b968-783d9802a6b3-calico-apiserver-certs\") pod \"calico-apiserver-5b7d8d9766-mzmnd\" (UID: \"ea3fa3f4-48d1-49ff-b968-783d9802a6b3\") " pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" Dec 12 17:26:44.731272 systemd[1]: Created slice kubepods-burstable-podf432633b_5af0_45f1_805a_f300620eb030.slice - libcontainer container kubepods-burstable-podf432633b_5af0_45f1_805a_f300620eb030.slice. 
Dec 12 17:26:44.997687 kubelet[3664]: I1212 17:26:44.798255 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7slkq\" (UniqueName: \"kubernetes.io/projected/ea3fa3f4-48d1-49ff-b968-783d9802a6b3-kube-api-access-7slkq\") pod \"calico-apiserver-5b7d8d9766-mzmnd\" (UID: \"ea3fa3f4-48d1-49ff-b968-783d9802a6b3\") " pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" Dec 12 17:26:44.997687 kubelet[3664]: I1212 17:26:44.798267 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6cgb\" (UniqueName: \"kubernetes.io/projected/ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6-kube-api-access-w6cgb\") pod \"coredns-668d6bf9bc-rfcrw\" (UID: \"ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6\") " pod="kube-system/coredns-668d6bf9bc-rfcrw" Dec 12 17:26:44.997687 kubelet[3664]: I1212 17:26:44.798282 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e108b102-1b18-4781-b323-af4f0e442eb0-calico-apiserver-certs\") pod \"calico-apiserver-5b7d8d9766-d9xlz\" (UID: \"e108b102-1b18-4781-b323-af4f0e442eb0\") " pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" Dec 12 17:26:44.997687 kubelet[3664]: I1212 17:26:44.798294 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhps\" (UniqueName: \"kubernetes.io/projected/444315f1-eff9-48ed-a7dc-c9b319819cb8-kube-api-access-2rhps\") pod \"goldmane-666569f655-8dz6b\" (UID: \"444315f1-eff9-48ed-a7dc-c9b319819cb8\") " pod="calico-system/goldmane-666569f655-8dz6b" Dec 12 17:26:44.997687 kubelet[3664]: I1212 17:26:44.798309 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b5a783ab-afe4-428e-907f-e91738bea7d8-whisker-backend-key-pair\") pod \"whisker-799cd88dbb-bvhhr\" (UID: \"b5a783ab-afe4-428e-907f-e91738bea7d8\") " pod="calico-system/whisker-799cd88dbb-bvhhr" Dec 12 17:26:44.736413 systemd[1]: Created slice kubepods-besteffort-pod444315f1_eff9_48ed_a7dc_c9b319819cb8.slice - libcontainer container kubepods-besteffort-pod444315f1_eff9_48ed_a7dc_c9b319819cb8.slice. 
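The systemd "Created slice kubepods-…" records mixed into the volume messages above show the kubelet's systemd cgroup driver creating a slice per pod: the QoS class and the pod UID (with dashes replaced by underscores) are folded into the unit name, e.g. pod UID 3862f9ed-932b-48f5-bc48-007596c724c2 becomes kubepods-besteffort-pod3862f9ed_932b_48f5_bc48_007596c724c2.slice. A tiny helper sketch for mapping these unit names back to pod UIDs; it only covers the pattern visible in this log (besteffort and burstable pods), not every QoS case:

```go
// Reproduce the pod slice naming seen in the "Created slice" records above.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	// Dashes in the UID are escaped to underscores in the systemd unit name.
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the calico-kube-controllers pod in this log.
	fmt.Println(podSliceName("besteffort", "3862f9ed-932b-48f5-bc48-007596c724c2"))
	// kubepods-besteffort-pod3862f9ed_932b_48f5_bc48_007596c724c2.slice
}
```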
Dec 12 17:26:44.997795 kubelet[3664]: I1212 17:26:44.798319 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a783ab-afe4-428e-907f-e91738bea7d8-whisker-ca-bundle\") pod \"whisker-799cd88dbb-bvhhr\" (UID: \"b5a783ab-afe4-428e-907f-e91738bea7d8\") " pod="calico-system/whisker-799cd88dbb-bvhhr" Dec 12 17:26:44.997795 kubelet[3664]: I1212 17:26:44.798328 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3862f9ed-932b-48f5-bc48-007596c724c2-tigera-ca-bundle\") pod \"calico-kube-controllers-7cb6ff9884-zb77j\" (UID: \"3862f9ed-932b-48f5-bc48-007596c724c2\") " pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" Dec 12 17:26:44.997795 kubelet[3664]: I1212 17:26:44.798338 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7l28\" (UniqueName: \"kubernetes.io/projected/3862f9ed-932b-48f5-bc48-007596c724c2-kube-api-access-n7l28\") pod \"calico-kube-controllers-7cb6ff9884-zb77j\" (UID: \"3862f9ed-932b-48f5-bc48-007596c724c2\") " pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" Dec 12 17:26:44.741846 systemd[1]: Created slice kubepods-besteffort-pode108b102_1b18_4781_b323_af4f0e442eb0.slice - libcontainer container kubepods-besteffort-pode108b102_1b18_4781_b323_af4f0e442eb0.slice. Dec 12 17:26:44.909798 systemd[1]: Created slice kubepods-besteffort-podb9c95eef_5d4b_4e75_a0b3_a9dd678f3f4b.slice - libcontainer container kubepods-besteffort-podb9c95eef_5d4b_4e75_a0b3_a9dd678f3f4b.slice. Dec 12 17:26:45.000507 containerd[2108]: time="2025-12-12T17:26:44.999212382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trqfx,Uid:b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:45.023183 containerd[2108]: time="2025-12-12T17:26:45.023150006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7g82h,Uid:f432633b-5af0-45f1-805a-f300620eb030,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:45.024248 containerd[2108]: time="2025-12-12T17:26:45.024223613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7d8d9766-d9xlz,Uid:e108b102-1b18-4781-b323-af4f0e442eb0,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:45.025106 containerd[2108]: time="2025-12-12T17:26:45.024978621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7d8d9766-mzmnd,Uid:ea3fa3f4-48d1-49ff-b968-783d9802a6b3,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:45.301357 containerd[2108]: time="2025-12-12T17:26:45.301254976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ff9884-zb77j,Uid:3862f9ed-932b-48f5-bc48-007596c724c2,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:45.320465 containerd[2108]: time="2025-12-12T17:26:45.320377409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-799cd88dbb-bvhhr,Uid:b5a783ab-afe4-428e-907f-e91738bea7d8,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:45.320465 containerd[2108]: time="2025-12-12T17:26:45.320377601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfcrw,Uid:ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:45.610682 containerd[2108]: time="2025-12-12T17:26:45.610483012Z" level=error msg="Failed to destroy network for 
sandbox \"41077c5e00fcd9ff6e8965ef53bd99573895a3e6c0ad27bb0f0f024aa1af99ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.620093 containerd[2108]: time="2025-12-12T17:26:45.619972495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ff9884-zb77j,Uid:3862f9ed-932b-48f5-bc48-007596c724c2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41077c5e00fcd9ff6e8965ef53bd99573895a3e6c0ad27bb0f0f024aa1af99ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.620901 kubelet[3664]: E1212 17:26:45.620804 3664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41077c5e00fcd9ff6e8965ef53bd99573895a3e6c0ad27bb0f0f024aa1af99ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.621053 kubelet[3664]: E1212 17:26:45.620982 3664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41077c5e00fcd9ff6e8965ef53bd99573895a3e6c0ad27bb0f0f024aa1af99ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" Dec 12 17:26:45.621053 kubelet[3664]: E1212 17:26:45.621011 3664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41077c5e00fcd9ff6e8965ef53bd99573895a3e6c0ad27bb0f0f024aa1af99ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" Dec 12 17:26:45.621385 kubelet[3664]: E1212 17:26:45.621055 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb6ff9884-zb77j_calico-system(3862f9ed-932b-48f5-bc48-007596c724c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb6ff9884-zb77j_calico-system(3862f9ed-932b-48f5-bc48-007596c724c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41077c5e00fcd9ff6e8965ef53bd99573895a3e6c0ad27bb0f0f024aa1af99ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:26:45.661476 containerd[2108]: time="2025-12-12T17:26:45.661309363Z" level=error msg="Failed to destroy network for sandbox \"e3f4391901287630ba64ad5c298d83c5bcf5b23e48a396c24aea2f3787af38c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 12 17:26:45.665477 systemd[1]: run-netns-cni\x2dfba553fc\x2de895\x2d2db3\x2df347\x2d0a7a5c93f62f.mount: Deactivated successfully. Dec 12 17:26:45.672211 containerd[2108]: time="2025-12-12T17:26:45.672166947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfcrw,Uid:ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f4391901287630ba64ad5c298d83c5bcf5b23e48a396c24aea2f3787af38c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.672436 kubelet[3664]: E1212 17:26:45.672387 3664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f4391901287630ba64ad5c298d83c5bcf5b23e48a396c24aea2f3787af38c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.672961 kubelet[3664]: E1212 17:26:45.672443 3664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f4391901287630ba64ad5c298d83c5bcf5b23e48a396c24aea2f3787af38c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rfcrw" Dec 12 17:26:45.672961 kubelet[3664]: E1212 17:26:45.672460 3664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f4391901287630ba64ad5c298d83c5bcf5b23e48a396c24aea2f3787af38c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rfcrw" Dec 12 17:26:45.672961 kubelet[3664]: E1212 17:26:45.672502 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rfcrw_kube-system(ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rfcrw_kube-system(ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3f4391901287630ba64ad5c298d83c5bcf5b23e48a396c24aea2f3787af38c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rfcrw" podUID="ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6" Dec 12 17:26:45.675254 containerd[2108]: time="2025-12-12T17:26:45.675217876Z" level=error msg="Failed to destroy network for sandbox \"a910d099fb6c85f6b4f2e3b9d7ff3f096d864b137e926d0c5d10cfe7fb66e8a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.679191 systemd[1]: run-netns-cni\x2df6d5c8f8\x2d47ab\x2d2d3a\x2d036b\x2d14ed9ced0225.mount: Deactivated successfully. 
Dec 12 17:26:45.686929 containerd[2108]: time="2025-12-12T17:26:45.686886206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7g82h,Uid:f432633b-5af0-45f1-805a-f300620eb030,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a910d099fb6c85f6b4f2e3b9d7ff3f096d864b137e926d0c5d10cfe7fb66e8a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.689903 containerd[2108]: time="2025-12-12T17:26:45.687166396Z" level=error msg="Failed to destroy network for sandbox \"c01298683c3cbe6903295ac544f5f643b2cd9ea8d73de5e4b26f76b560d9830e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.690427 kubelet[3664]: E1212 17:26:45.690179 3664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a910d099fb6c85f6b4f2e3b9d7ff3f096d864b137e926d0c5d10cfe7fb66e8a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.690427 kubelet[3664]: E1212 17:26:45.690245 3664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a910d099fb6c85f6b4f2e3b9d7ff3f096d864b137e926d0c5d10cfe7fb66e8a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7g82h" Dec 12 17:26:45.690427 kubelet[3664]: E1212 17:26:45.690264 3664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a910d099fb6c85f6b4f2e3b9d7ff3f096d864b137e926d0c5d10cfe7fb66e8a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7g82h" Dec 12 17:26:45.692351 kubelet[3664]: E1212 17:26:45.690305 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7g82h_kube-system(f432633b-5af0-45f1-805a-f300620eb030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7g82h_kube-system(f432633b-5af0-45f1-805a-f300620eb030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a910d099fb6c85f6b4f2e3b9d7ff3f096d864b137e926d0c5d10cfe7fb66e8a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7g82h" podUID="f432633b-5af0-45f1-805a-f300620eb030" Dec 12 17:26:45.691823 systemd[1]: run-netns-cni\x2deee8665e\x2d5a53\x2d75f0\x2d7fdf\x2d3f7af093cd7d.mount: Deactivated successfully. 
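The run-netns-cni\x2d….mount units being deactivated above are the systemd mount units for the per-sandbox network namespaces that containerd tears down after each failed RunPodSandbox attempt. Mount unit names encode the mount point path with systemd escaping: "/" becomes "-" and literal dashes become "\x2d". A small decoding sketch (illustrative helper, not part of any tool referenced in this log), useful for mapping a unit name like run-netns-cni\x2dfba553fc\x2d….mount back to its /run/netns/cni-… path:

```go
// Unescape a systemd mount unit name back into the mounted path.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeMountUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			// In a unit name, "-" stands for a path separator.
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			// "\xNN" is a hex-escaped byte (e.g. "\x2d" for a literal dash).
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeMountUnit(`run-netns-cni\x2dfba553fc\x2de895\x2d2db3\x2df347\x2d0a7a5c93f62f.mount`))
	// /run/netns/cni-fba553fc-e895-2db3-f347-0a7a5c93f62f
}
```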
Dec 12 17:26:45.692936 containerd[2108]: time="2025-12-12T17:26:45.690122627Z" level=error msg="Failed to destroy network for sandbox \"d51aa7a5f6c8b15cabe67d2feeae584fe652b801fb2eb60e9d9aa7c95762435b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.697083 containerd[2108]: time="2025-12-12T17:26:45.694783543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trqfx,Uid:b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01298683c3cbe6903295ac544f5f643b2cd9ea8d73de5e4b26f76b560d9830e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.695270 systemd[1]: run-netns-cni\x2dd7d6025c\x2db845\x2d3bad\x2d8763\x2d3b8390082ab3.mount: Deactivated successfully. Dec 12 17:26:45.697228 kubelet[3664]: E1212 17:26:45.697115 3664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01298683c3cbe6903295ac544f5f643b2cd9ea8d73de5e4b26f76b560d9830e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.697413 kubelet[3664]: E1212 17:26:45.697170 3664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01298683c3cbe6903295ac544f5f643b2cd9ea8d73de5e4b26f76b560d9830e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-trqfx" Dec 12 17:26:45.697467 kubelet[3664]: E1212 17:26:45.697399 3664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01298683c3cbe6903295ac544f5f643b2cd9ea8d73de5e4b26f76b560d9830e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-trqfx" Dec 12 17:26:45.697496 kubelet[3664]: E1212 17:26:45.697459 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c01298683c3cbe6903295ac544f5f643b2cd9ea8d73de5e4b26f76b560d9830e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:26:45.705890 containerd[2108]: time="2025-12-12T17:26:45.705029706Z" level=error msg="Failed to destroy network for sandbox \"5b66acf815a9f3153b02b6506a617f2aafe335860aeb65a877c2c70734e5a447\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.706093 containerd[2108]: time="2025-12-12T17:26:45.705357065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-799cd88dbb-bvhhr,Uid:b5a783ab-afe4-428e-907f-e91738bea7d8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d51aa7a5f6c8b15cabe67d2feeae584fe652b801fb2eb60e9d9aa7c95762435b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.706233 kubelet[3664]: E1212 17:26:45.706201 3664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d51aa7a5f6c8b15cabe67d2feeae584fe652b801fb2eb60e9d9aa7c95762435b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.706301 kubelet[3664]: E1212 17:26:45.706274 3664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d51aa7a5f6c8b15cabe67d2feeae584fe652b801fb2eb60e9d9aa7c95762435b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-799cd88dbb-bvhhr" Dec 12 17:26:45.706348 kubelet[3664]: E1212 17:26:45.706332 3664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d51aa7a5f6c8b15cabe67d2feeae584fe652b801fb2eb60e9d9aa7c95762435b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-799cd88dbb-bvhhr" Dec 12 17:26:45.706403 kubelet[3664]: E1212 17:26:45.706375 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-799cd88dbb-bvhhr_calico-system(b5a783ab-afe4-428e-907f-e91738bea7d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-799cd88dbb-bvhhr_calico-system(b5a783ab-afe4-428e-907f-e91738bea7d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d51aa7a5f6c8b15cabe67d2feeae584fe652b801fb2eb60e9d9aa7c95762435b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-799cd88dbb-bvhhr" podUID="b5a783ab-afe4-428e-907f-e91738bea7d8" Dec 12 17:26:45.710155 containerd[2108]: time="2025-12-12T17:26:45.710122871Z" level=error msg="Failed to destroy network for sandbox \"cf765f6e7f37ddeca9945378ecc2609c84e51df08021729ef788694ef76d5dfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.712174 containerd[2108]: time="2025-12-12T17:26:45.712087953Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b7d8d9766-d9xlz,Uid:e108b102-1b18-4781-b323-af4f0e442eb0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b66acf815a9f3153b02b6506a617f2aafe335860aeb65a877c2c70734e5a447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.712405 kubelet[3664]: E1212 17:26:45.712362 3664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b66acf815a9f3153b02b6506a617f2aafe335860aeb65a877c2c70734e5a447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.712465 kubelet[3664]: E1212 17:26:45.712413 3664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b66acf815a9f3153b02b6506a617f2aafe335860aeb65a877c2c70734e5a447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" Dec 12 17:26:45.712465 kubelet[3664]: E1212 17:26:45.712426 3664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b66acf815a9f3153b02b6506a617f2aafe335860aeb65a877c2c70734e5a447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" Dec 12 17:26:45.712508 kubelet[3664]: E1212 17:26:45.712457 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b7d8d9766-d9xlz_calico-apiserver(e108b102-1b18-4781-b323-af4f0e442eb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b7d8d9766-d9xlz_calico-apiserver(e108b102-1b18-4781-b323-af4f0e442eb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b66acf815a9f3153b02b6506a617f2aafe335860aeb65a877c2c70734e5a447\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:26:45.719272 containerd[2108]: time="2025-12-12T17:26:45.719162280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7d8d9766-mzmnd,Uid:ea3fa3f4-48d1-49ff-b968-783d9802a6b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf765f6e7f37ddeca9945378ecc2609c84e51df08021729ef788694ef76d5dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.719499 kubelet[3664]: E1212 17:26:45.719455 3664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cf765f6e7f37ddeca9945378ecc2609c84e51df08021729ef788694ef76d5dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:45.719566 kubelet[3664]: E1212 17:26:45.719506 3664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf765f6e7f37ddeca9945378ecc2609c84e51df08021729ef788694ef76d5dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" Dec 12 17:26:45.719566 kubelet[3664]: E1212 17:26:45.719521 3664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf765f6e7f37ddeca9945378ecc2609c84e51df08021729ef788694ef76d5dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" Dec 12 17:26:45.719566 kubelet[3664]: E1212 17:26:45.719552 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b7d8d9766-mzmnd_calico-apiserver(ea3fa3f4-48d1-49ff-b968-783d9802a6b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b7d8d9766-mzmnd_calico-apiserver(ea3fa3f4-48d1-49ff-b968-783d9802a6b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf765f6e7f37ddeca9945378ecc2609c84e51df08021729ef788694ef76d5dfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:26:45.898983 kubelet[3664]: E1212 17:26:45.898950 3664 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 12 17:26:45.899114 kubelet[3664]: E1212 17:26:45.899103 3664 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/444315f1-eff9-48ed-a7dc-c9b319819cb8-goldmane-ca-bundle podName:444315f1-eff9-48ed-a7dc-c9b319819cb8 nodeName:}" failed. No retries permitted until 2025-12-12 17:26:46.399072222 +0000 UTC m=+38.572344859 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/444315f1-eff9-48ed-a7dc-c9b319819cb8-goldmane-ca-bundle") pod "goldmane-666569f655-8dz6b" (UID: "444315f1-eff9-48ed-a7dc-c9b319819cb8") : failed to sync configmap cache: timed out waiting for the condition Dec 12 17:26:46.030768 containerd[2108]: time="2025-12-12T17:26:46.030651763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:26:46.520556 containerd[2108]: time="2025-12-12T17:26:46.520511493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8dz6b,Uid:444315f1-eff9-48ed-a7dc-c9b319819cb8,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:46.562740 containerd[2108]: time="2025-12-12T17:26:46.562691130Z" level=error msg="Failed to destroy network for sandbox \"335ebc55726b16f923285b042b2cb6d46350a00a2407a057161397c61309f6f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:46.570386 containerd[2108]: time="2025-12-12T17:26:46.570337683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8dz6b,Uid:444315f1-eff9-48ed-a7dc-c9b319819cb8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"335ebc55726b16f923285b042b2cb6d46350a00a2407a057161397c61309f6f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:46.570597 kubelet[3664]: E1212 17:26:46.570558 3664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335ebc55726b16f923285b042b2cb6d46350a00a2407a057161397c61309f6f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:46.570646 kubelet[3664]: E1212 17:26:46.570625 3664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335ebc55726b16f923285b042b2cb6d46350a00a2407a057161397c61309f6f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8dz6b" Dec 12 17:26:46.570669 kubelet[3664]: E1212 17:26:46.570645 3664 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335ebc55726b16f923285b042b2cb6d46350a00a2407a057161397c61309f6f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8dz6b" Dec 12 17:26:46.570713 kubelet[3664]: E1212 17:26:46.570686 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8dz6b_calico-system(444315f1-eff9-48ed-a7dc-c9b319819cb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8dz6b_calico-system(444315f1-eff9-48ed-a7dc-c9b319819cb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"335ebc55726b16f923285b042b2cb6d46350a00a2407a057161397c61309f6f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:26:46.635834 systemd[1]: run-netns-cni\x2d3ce50b68\x2df51b\x2d03a2\x2dd3ae\x2d8a44a6ac8c0a.mount: Deactivated successfully. Dec 12 17:26:46.636106 systemd[1]: run-netns-cni\x2d235a6bcd\x2d82d4\x2d870c\x2d33b0\x2da2c6c9c92d61.mount: Deactivated successfully. Dec 12 17:26:49.828191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount547547972.mount: Deactivated successfully. Dec 12 17:26:50.320388 containerd[2108]: time="2025-12-12T17:26:50.320304212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:26:50.330750 containerd[2108]: time="2025-12-12T17:26:50.330693513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.299649478s" Dec 12 17:26:50.330750 containerd[2108]: time="2025-12-12T17:26:50.330741874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:26:50.343671 containerd[2108]: time="2025-12-12T17:26:50.340525579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.343671 containerd[2108]: time="2025-12-12T17:26:50.343015642Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.343671 containerd[2108]: time="2025-12-12T17:26:50.343373265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.344775 containerd[2108]: time="2025-12-12T17:26:50.344744448Z" level=info msg="CreateContainer within sandbox \"1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:26:50.378588 containerd[2108]: time="2025-12-12T17:26:50.378536003Z" level=info msg="Container 335a8f55e4d32409cd389f8d38ed06f15e4691309cab70c36918d4595332f9ee: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:50.408601 containerd[2108]: time="2025-12-12T17:26:50.408551874Z" level=info msg="CreateContainer within sandbox \"1accfab34032677cb1967f0961cb02004bc814d6eba795916239da483e73d994\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"335a8f55e4d32409cd389f8d38ed06f15e4691309cab70c36918d4595332f9ee\"" Dec 12 17:26:50.409437 containerd[2108]: time="2025-12-12T17:26:50.409207920Z" level=info msg="StartContainer for \"335a8f55e4d32409cd389f8d38ed06f15e4691309cab70c36918d4595332f9ee\"" Dec 12 17:26:50.410553 containerd[2108]: time="2025-12-12T17:26:50.410526414Z" level=info msg="connecting to shim 335a8f55e4d32409cd389f8d38ed06f15e4691309cab70c36918d4595332f9ee" 
address="unix:///run/containerd/s/d4aeee1ecd316b1657393bd07df09e8aeb587297d0d2851882bfb960a0268635" protocol=ttrpc version=3 Dec 12 17:26:50.447056 systemd[1]: Started cri-containerd-335a8f55e4d32409cd389f8d38ed06f15e4691309cab70c36918d4595332f9ee.scope - libcontainer container 335a8f55e4d32409cd389f8d38ed06f15e4691309cab70c36918d4595332f9ee. Dec 12 17:26:50.490908 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 12 17:26:50.491048 kernel: audit: type=1334 audit(1765560410.486:602): prog-id=196 op=LOAD Dec 12 17:26:50.486000 audit: BPF prog-id=196 op=LOAD Dec 12 17:26:50.511933 kernel: audit: type=1300 audit(1765560410.486:602): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b83e8 a2=98 a3=0 items=0 ppid=4189 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.486000 audit[4703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b83e8 a2=98 a3=0 items=0 ppid=4189 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333356138663535653464333234303963643338396638643338656430 Dec 12 17:26:50.493000 audit: BPF prog-id=197 op=LOAD Dec 12 17:26:50.535340 kernel: audit: type=1327 audit(1765560410.486:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333356138663535653464333234303963643338396638643338656430 Dec 12 17:26:50.535424 kernel: audit: type=1334 audit(1765560410.493:603): prog-id=197 op=LOAD Dec 12 17:26:50.493000 audit[4703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b8168 a2=98 a3=0 items=0 ppid=4189 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.551938 kernel: audit: type=1300 audit(1765560410.493:603): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b8168 a2=98 a3=0 items=0 ppid=4189 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333356138663535653464333234303963643338396638643338656430 Dec 12 17:26:50.570009 kernel: audit: type=1327 audit(1765560410.493:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333356138663535653464333234303963643338396638643338656430 Dec 12 17:26:50.493000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:26:50.576418 kernel: audit: type=1334 audit(1765560410.493:604): prog-id=197 op=UNLOAD Dec 12 17:26:50.493000 
audit[4703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.594030 kernel: audit: type=1300 audit(1765560410.493:604): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333356138663535653464333234303963643338396638643338656430 Dec 12 17:26:50.612640 kernel: audit: type=1327 audit(1765560410.493:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333356138663535653464333234303963643338396638643338656430 Dec 12 17:26:50.494000 audit: BPF prog-id=196 op=UNLOAD Dec 12 17:26:50.617841 kernel: audit: type=1334 audit(1765560410.494:605): prog-id=196 op=UNLOAD Dec 12 17:26:50.494000 audit[4703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4189 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333356138663535653464333234303963643338396638643338656430 Dec 12 17:26:50.494000 audit: BPF prog-id=198 op=LOAD Dec 12 17:26:50.494000 audit[4703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b8648 a2=98 a3=0 items=0 ppid=4189 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333356138663535653464333234303963643338396638643338656430 Dec 12 17:26:50.631840 containerd[2108]: time="2025-12-12T17:26:50.631806912Z" level=info msg="StartContainer for \"335a8f55e4d32409cd389f8d38ed06f15e4691309cab70c36918d4595332f9ee\" returns successfully" Dec 12 17:26:51.010636 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:26:51.010775 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 12 17:26:51.068418 kubelet[3664]: I1212 17:26:51.068288 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wc48d" podStartSLOduration=1.527278397 podStartE2EDuration="16.068258943s" podCreationTimestamp="2025-12-12 17:26:35 +0000 UTC" firstStartedPulling="2025-12-12 17:26:35.790770559 +0000 UTC m=+27.964043188" lastFinishedPulling="2025-12-12 17:26:50.331751105 +0000 UTC m=+42.505023734" observedRunningTime="2025-12-12 17:26:51.067225912 +0000 UTC m=+43.240498573" watchObservedRunningTime="2025-12-12 17:26:51.068258943 +0000 UTC m=+43.241531580" Dec 12 17:26:51.297076 kubelet[3664]: I1212 17:26:51.296645 3664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qbb7\" (UniqueName: \"kubernetes.io/projected/b5a783ab-afe4-428e-907f-e91738bea7d8-kube-api-access-9qbb7\") pod \"b5a783ab-afe4-428e-907f-e91738bea7d8\" (UID: \"b5a783ab-afe4-428e-907f-e91738bea7d8\") " Dec 12 17:26:51.297076 kubelet[3664]: I1212 17:26:51.296714 3664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a783ab-afe4-428e-907f-e91738bea7d8-whisker-ca-bundle\") pod \"b5a783ab-afe4-428e-907f-e91738bea7d8\" (UID: \"b5a783ab-afe4-428e-907f-e91738bea7d8\") " Dec 12 17:26:51.297076 kubelet[3664]: I1212 17:26:51.296904 3664 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b5a783ab-afe4-428e-907f-e91738bea7d8-whisker-backend-key-pair\") pod \"b5a783ab-afe4-428e-907f-e91738bea7d8\" (UID: \"b5a783ab-afe4-428e-907f-e91738bea7d8\") " Dec 12 17:26:51.306128 kubelet[3664]: I1212 17:26:51.306089 3664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a783ab-afe4-428e-907f-e91738bea7d8-kube-api-access-9qbb7" (OuterVolumeSpecName: "kube-api-access-9qbb7") pod "b5a783ab-afe4-428e-907f-e91738bea7d8" (UID: "b5a783ab-afe4-428e-907f-e91738bea7d8"). InnerVolumeSpecName "kube-api-access-9qbb7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:26:51.306432 kubelet[3664]: I1212 17:26:51.306373 3664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a783ab-afe4-428e-907f-e91738bea7d8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b5a783ab-afe4-428e-907f-e91738bea7d8" (UID: "b5a783ab-afe4-428e-907f-e91738bea7d8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:26:51.307946 systemd[1]: var-lib-kubelet-pods-b5a783ab\x2dafe4\x2d428e\x2d907f\x2de91738bea7d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9qbb7.mount: Deactivated successfully. Dec 12 17:26:51.313927 systemd[1]: var-lib-kubelet-pods-b5a783ab\x2dafe4\x2d428e\x2d907f\x2de91738bea7d8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:26:51.315367 kubelet[3664]: I1212 17:26:51.315331 3664 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a783ab-afe4-428e-907f-e91738bea7d8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b5a783ab-afe4-428e-907f-e91738bea7d8" (UID: "b5a783ab-afe4-428e-907f-e91738bea7d8"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:26:51.397526 kubelet[3664]: I1212 17:26:51.397444 3664 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9qbb7\" (UniqueName: \"kubernetes.io/projected/b5a783ab-afe4-428e-907f-e91738bea7d8-kube-api-access-9qbb7\") on node \"ci-4515.1.0-a-74f46d5ce1\" DevicePath \"\"" Dec 12 17:26:51.397526 kubelet[3664]: I1212 17:26:51.397489 3664 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a783ab-afe4-428e-907f-e91738bea7d8-whisker-ca-bundle\") on node \"ci-4515.1.0-a-74f46d5ce1\" DevicePath \"\"" Dec 12 17:26:51.397526 kubelet[3664]: I1212 17:26:51.397498 3664 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b5a783ab-afe4-428e-907f-e91738bea7d8-whisker-backend-key-pair\") on node \"ci-4515.1.0-a-74f46d5ce1\" DevicePath \"\"" Dec 12 17:26:51.908908 systemd[1]: Removed slice kubepods-besteffort-podb5a783ab_afe4_428e_907f_e91738bea7d8.slice - libcontainer container kubepods-besteffort-podb5a783ab_afe4_428e_907f_e91738bea7d8.slice. Dec 12 17:26:52.166690 systemd[1]: Created slice kubepods-besteffort-pod202224df_4814_4cd7_bd50_d9bc16a19fb7.slice - libcontainer container kubepods-besteffort-pod202224df_4814_4cd7_bd50_d9bc16a19fb7.slice. Dec 12 17:26:52.302001 kubelet[3664]: I1212 17:26:52.301950 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/202224df-4814-4cd7-bd50-d9bc16a19fb7-whisker-backend-key-pair\") pod \"whisker-74445c9dcb-4pqcz\" (UID: \"202224df-4814-4cd7-bd50-d9bc16a19fb7\") " pod="calico-system/whisker-74445c9dcb-4pqcz" Dec 12 17:26:52.302001 kubelet[3664]: I1212 17:26:52.302001 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnww\" (UniqueName: \"kubernetes.io/projected/202224df-4814-4cd7-bd50-d9bc16a19fb7-kube-api-access-clnww\") pod \"whisker-74445c9dcb-4pqcz\" (UID: \"202224df-4814-4cd7-bd50-d9bc16a19fb7\") " pod="calico-system/whisker-74445c9dcb-4pqcz" Dec 12 17:26:52.302001 kubelet[3664]: I1212 17:26:52.302023 3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/202224df-4814-4cd7-bd50-d9bc16a19fb7-whisker-ca-bundle\") pod \"whisker-74445c9dcb-4pqcz\" (UID: \"202224df-4814-4cd7-bd50-d9bc16a19fb7\") " pod="calico-system/whisker-74445c9dcb-4pqcz" Dec 12 17:26:52.471382 containerd[2108]: time="2025-12-12T17:26:52.471271199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74445c9dcb-4pqcz,Uid:202224df-4814-4cd7-bd50-d9bc16a19fb7,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:52.898760 systemd-networkd[1689]: cali7f0ab63e899: Link UP Dec 12 17:26:52.899317 systemd-networkd[1689]: cali7f0ab63e899: Gained carrier Dec 12 17:26:52.915389 containerd[2108]: 2025-12-12 17:26:52.718 [INFO][4907] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:26:52.915389 containerd[2108]: 2025-12-12 17:26:52.773 [INFO][4907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0 whisker-74445c9dcb- calico-system 202224df-4814-4cd7-bd50-d9bc16a19fb7 871 0 2025-12-12 17:26:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker 
pod-template-hash:74445c9dcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-a-74f46d5ce1 whisker-74445c9dcb-4pqcz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7f0ab63e899 [] [] }} ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Namespace="calico-system" Pod="whisker-74445c9dcb-4pqcz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-" Dec 12 17:26:52.915389 containerd[2108]: 2025-12-12 17:26:52.774 [INFO][4907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Namespace="calico-system" Pod="whisker-74445c9dcb-4pqcz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" Dec 12 17:26:52.915389 containerd[2108]: 2025-12-12 17:26:52.793 [INFO][4918] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" HandleID="k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.793 [INFO][4918] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" HandleID="k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-74f46d5ce1", "pod":"whisker-74445c9dcb-4pqcz", "timestamp":"2025-12-12 17:26:52.793053647 +0000 UTC"}, Hostname:"ci-4515.1.0-a-74f46d5ce1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.793 [INFO][4918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.793 [INFO][4918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.793 [INFO][4918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-74f46d5ce1' Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.801 [INFO][4918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.806 [INFO][4918] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.810 [INFO][4918] ipam/ipam.go 511: Trying affinity for 192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.811 [INFO][4918] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915687 containerd[2108]: 2025-12-12 17:26:52.814 [INFO][4918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915881 containerd[2108]: 2025-12-12 17:26:52.814 [INFO][4918] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915881 containerd[2108]: 2025-12-12 17:26:52.815 [INFO][4918] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08 Dec 12 17:26:52.915881 containerd[2108]: 2025-12-12 17:26:52.820 [INFO][4918] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915881 containerd[2108]: 2025-12-12 17:26:52.869 [INFO][4918] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.40.1/26] block=192.168.40.0/26 handle="k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915881 containerd[2108]: 2025-12-12 17:26:52.869 [INFO][4918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.1/26] handle="k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:52.915881 containerd[2108]: 2025-12-12 17:26:52.869 [INFO][4918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:26:52.915881 containerd[2108]: 2025-12-12 17:26:52.869 [INFO][4918] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.40.1/26] IPv6=[] ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" HandleID="k8s-pod-network.e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" Dec 12 17:26:52.916003 containerd[2108]: 2025-12-12 17:26:52.871 [INFO][4907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Namespace="calico-system" Pod="whisker-74445c9dcb-4pqcz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0", GenerateName:"whisker-74445c9dcb-", Namespace:"calico-system", SelfLink:"", UID:"202224df-4814-4cd7-bd50-d9bc16a19fb7", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74445c9dcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"", Pod:"whisker-74445c9dcb-4pqcz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7f0ab63e899", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:52.916003 containerd[2108]: 2025-12-12 17:26:52.871 [INFO][4907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.1/32] ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Namespace="calico-system" Pod="whisker-74445c9dcb-4pqcz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" Dec 12 17:26:52.916091 containerd[2108]: 2025-12-12 17:26:52.871 [INFO][4907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f0ab63e899 ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Namespace="calico-system" Pod="whisker-74445c9dcb-4pqcz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" Dec 12 17:26:52.916091 containerd[2108]: 2025-12-12 17:26:52.899 [INFO][4907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Namespace="calico-system" Pod="whisker-74445c9dcb-4pqcz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" Dec 12 17:26:52.916155 containerd[2108]: 2025-12-12 17:26:52.899 [INFO][4907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Namespace="calico-system" 
Pod="whisker-74445c9dcb-4pqcz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0", GenerateName:"whisker-74445c9dcb-", Namespace:"calico-system", SelfLink:"", UID:"202224df-4814-4cd7-bd50-d9bc16a19fb7", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74445c9dcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08", Pod:"whisker-74445c9dcb-4pqcz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7f0ab63e899", MAC:"3a:ef:71:b3:04:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:52.916364 containerd[2108]: 2025-12-12 17:26:52.913 [INFO][4907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" Namespace="calico-system" Pod="whisker-74445c9dcb-4pqcz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-whisker--74445c9dcb--4pqcz-eth0" Dec 12 17:26:53.198660 containerd[2108]: time="2025-12-12T17:26:53.198454407Z" level=info msg="connecting to shim e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08" address="unix:///run/containerd/s/c78956688a9a11b75099b61cbdd1d4f4c80d2206a08f2100401ae146c8318e39" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:53.228053 systemd[1]: Started cri-containerd-e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08.scope - libcontainer container e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08. 
Dec 12 17:26:53.239000 audit: BPF prog-id=199 op=LOAD Dec 12 17:26:53.240000 audit: BPF prog-id=200 op=LOAD Dec 12 17:26:53.240000 audit[4951]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4938 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530346536356632313438353233653631383933346337303430663934 Dec 12 17:26:53.240000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:26:53.240000 audit[4951]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4938 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530346536356632313438353233653631383933346337303430663934 Dec 12 17:26:53.240000 audit: BPF prog-id=201 op=LOAD Dec 12 17:26:53.240000 audit[4951]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4938 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530346536356632313438353233653631383933346337303430663934 Dec 12 17:26:53.240000 audit: BPF prog-id=202 op=LOAD Dec 12 17:26:53.240000 audit[4951]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4938 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530346536356632313438353233653631383933346337303430663934 Dec 12 17:26:53.240000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:26:53.240000 audit[4951]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4938 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530346536356632313438353233653631383933346337303430663934 Dec 12 17:26:53.240000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:26:53.240000 audit[4951]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4938 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530346536356632313438353233653631383933346337303430663934 Dec 12 17:26:53.240000 audit: BPF prog-id=203 op=LOAD Dec 12 17:26:53.240000 audit[4951]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4938 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530346536356632313438353233653631383933346337303430663934 Dec 12 17:26:53.270402 containerd[2108]: time="2025-12-12T17:26:53.270364789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74445c9dcb-4pqcz,Uid:202224df-4814-4cd7-bd50-d9bc16a19fb7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e04e65f2148523e618934c7040f9424c6c2539cd54e83e0d6c5c495c6306ee08\"" Dec 12 17:26:53.279785 containerd[2108]: time="2025-12-12T17:26:53.279751916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:26:53.544898 containerd[2108]: time="2025-12-12T17:26:53.544737645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:53.548830 containerd[2108]: time="2025-12-12T17:26:53.548723397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:26:53.548830 containerd[2108]: time="2025-12-12T17:26:53.548769934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:53.549045 kubelet[3664]: E1212 17:26:53.548988 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:26:53.549431 kubelet[3664]: E1212 17:26:53.549057 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:26:53.550656 kubelet[3664]: E1212 17:26:53.550589 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f4c1e637181e476989824504d7e7830e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-clnww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74445c9dcb-4pqcz_calico-system(202224df-4814-4cd7-bd50-d9bc16a19fb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:53.552481 containerd[2108]: time="2025-12-12T17:26:53.552399734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:26:53.823996 containerd[2108]: time="2025-12-12T17:26:53.823834941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:53.828588 containerd[2108]: time="2025-12-12T17:26:53.828492284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:26:53.828588 containerd[2108]: time="2025-12-12T17:26:53.828538205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:53.828781 kubelet[3664]: E1212 17:26:53.828736 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:26:53.828844 kubelet[3664]: E1212 17:26:53.828789 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:26:53.828959 kubelet[3664]: E1212 17:26:53.828908 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clnww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74445c9dcb-4pqcz_calico-system(202224df-4814-4cd7-bd50-d9bc16a19fb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:53.832589 kubelet[3664]: E1212 17:26:53.832539 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:26:53.901258 kubelet[3664]: I1212 17:26:53.901158 3664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a783ab-afe4-428e-907f-e91738bea7d8" path="/var/lib/kubelet/pods/b5a783ab-afe4-428e-907f-e91738bea7d8/volumes" Dec 12 17:26:54.054181 kubelet[3664]: E1212 17:26:54.054140 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:26:54.083000 audit[4997]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:54.083000 audit[4997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc68fc870 a2=0 a3=1 items=0 ppid=3812 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:54.085000 audit[4997]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:54.085000 audit[4997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc68fc870 a2=0 a3=1 items=0 ppid=3812 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:54.085000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:54.756023 systemd-networkd[1689]: cali7f0ab63e899: Gained IPv6LL Dec 12 17:26:55.057164 kubelet[3664]: E1212 17:26:55.056981 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:26:55.495148 kubelet[3664]: I1212 17:26:55.495001 3664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:26:55.547000 audit[5022]: NETFILTER_CFG table=filter:122 family=2 entries=21 op=nft_register_rule pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:55.552884 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 12 17:26:55.553074 kernel: audit: type=1325 audit(1765560415.547:617): table=filter:122 family=2 entries=21 op=nft_register_rule pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:55.547000 audit[5022]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=7480 a0=3 a1=ffffe7a3d2a0 a2=0 a3=1 items=0 ppid=3812 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.582584 kernel: audit: type=1300 audit(1765560415.547:617): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe7a3d2a0 a2=0 a3=1 items=0 ppid=3812 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:55.592902 kernel: audit: type=1327 audit(1765560415.547:617): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:55.562000 audit[5022]: NETFILTER_CFG table=nat:123 family=2 entries=19 op=nft_register_chain pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:55.602959 kernel: audit: type=1325 audit(1765560415.562:618): table=nat:123 family=2 entries=19 op=nft_register_chain pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:55.562000 audit[5022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe7a3d2a0 a2=0 a3=1 items=0 ppid=3812 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.624026 kernel: audit: type=1300 audit(1765560415.562:618): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe7a3d2a0 a2=0 a3=1 items=0 ppid=3812 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:55.634548 kernel: audit: type=1327 audit(1765560415.562:618): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:55.774000 audit: BPF prog-id=204 op=LOAD Dec 12 17:26:55.774000 audit[5055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd11a7d88 a2=98 a3=ffffd11a7d78 items=0 ppid=5024 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.799846 kernel: audit: type=1334 audit(1765560415.774:619): prog-id=204 op=LOAD Dec 12 17:26:55.799975 kernel: audit: type=1300 audit(1765560415.774:619): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd11a7d88 a2=98 a3=ffffd11a7d78 items=0 ppid=5024 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.774000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:55.821866 kernel: audit: type=1327 audit(1765560415.774:619): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:55.774000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:26:55.837922 kernel: audit: type=1334 audit(1765560415.774:620): prog-id=204 op=UNLOAD Dec 12 17:26:55.774000 audit[5055]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd11a7d58 a3=0 items=0 ppid=5024 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.774000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:55.774000 audit: BPF prog-id=205 op=LOAD Dec 12 17:26:55.774000 audit[5055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd11a7c38 a2=74 a3=95 items=0 ppid=5024 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.774000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:55.774000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:26:55.774000 audit[5055]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5024 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.774000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:55.774000 audit: BPF prog-id=206 op=LOAD Dec 12 17:26:55.774000 audit[5055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd11a7c68 a2=40 a3=ffffd11a7c98 items=0 ppid=5024 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.774000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:55.774000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:26:55.774000 audit[5055]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd11a7c98 items=0 ppid=5024 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.774000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:55.782000 audit: BPF prog-id=207 op=LOAD Dec 12 17:26:55.782000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe8bd2fe8 a2=98 a3=ffffe8bd2fd8 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.821000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:26:55.821000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe8bd2fb8 a3=0 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.821000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.821000 audit: BPF prog-id=208 op=LOAD Dec 12 17:26:55.821000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe8bd2c78 a2=74 a3=95 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.821000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.822000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:26:55.822000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.822000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.822000 audit: BPF prog-id=209 op=LOAD Dec 12 17:26:55.822000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe8bd2cd8 a2=94 a3=2 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.822000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.822000 audit: BPF prog-id=209 op=UNLOAD Dec 12 17:26:55.822000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.822000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.931000 audit: BPF prog-id=210 op=LOAD Dec 
12 17:26:55.931000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe8bd2c98 a2=40 a3=ffffe8bd2cc8 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.931000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.931000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:26:55.931000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe8bd2cc8 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.931000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.937000 audit: BPF prog-id=211 op=LOAD Dec 12 17:26:55.937000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe8bd2ca8 a2=94 a3=4 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.937000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.938000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:26:55.938000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.938000 audit: BPF prog-id=212 op=LOAD Dec 12 17:26:55.938000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe8bd2ae8 a2=94 a3=5 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.938000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:26:55.938000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.938000 audit: BPF prog-id=213 op=LOAD Dec 12 17:26:55.938000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe8bd2d18 a2=94 a3=6 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.938000 audit: BPF prog-id=213 op=UNLOAD Dec 12 17:26:55.938000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.938000 audit: BPF prog-id=214 op=LOAD Dec 12 17:26:55.938000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe8bd24e8 a2=94 a3=83 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.938000 audit: BPF prog-id=215 op=LOAD Dec 12 17:26:55.938000 audit[5057]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe8bd22a8 a2=94 a3=2 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.938000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:26:55.938000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.939000 audit: BPF prog-id=214 op=UNLOAD Dec 12 17:26:55.939000 audit[5057]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2c2e2620 a3=2c2d5b00 items=0 ppid=5024 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:55.947000 audit: BPF prog-id=216 op=LOAD Dec 12 17:26:55.947000 audit[5077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff08969c8 a2=98 a3=fffff08969b8 items=0 ppid=5024 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.947000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:55.947000 audit: BPF prog-id=216 op=UNLOAD Dec 12 17:26:55.947000 audit[5077]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff0896998 a3=0 items=0 ppid=5024 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.947000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:55.947000 audit: BPF prog-id=217 op=LOAD Dec 12 17:26:55.947000 audit[5077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0896878 a2=74 a3=95 items=0 ppid=5024 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.947000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:55.948000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:26:55.948000 audit[5077]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5024 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.948000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:55.948000 audit: BPF prog-id=218 op=LOAD Dec 12 17:26:55.948000 audit[5077]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff08968a8 a2=40 a3=fffff08968d8 items=0 ppid=5024 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.948000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:55.948000 audit: BPF prog-id=218 op=UNLOAD Dec 12 17:26:55.948000 audit[5077]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff08968d8 items=0 ppid=5024 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:55.948000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:56.152696 systemd-networkd[1689]: vxlan.calico: Link UP Dec 12 17:26:56.152703 systemd-networkd[1689]: vxlan.calico: Gained carrier Dec 12 17:26:56.169000 audit: BPF prog-id=219 op=LOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe45821c8 a2=98 a3=ffffe45821b8 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=219 op=UNLOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe4582198 a3=0 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=220 op=LOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe4581ea8 a2=74 a3=95 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=220 op=UNLOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=221 op=LOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe4581f08 a2=94 a3=2 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=221 op=UNLOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF 
prog-id=222 op=LOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe4581d88 a2=40 a3=ffffe4581db8 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffe4581db8 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=223 op=LOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe4581ed8 a2=94 a3=b7 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=223 op=UNLOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=224 op=LOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe4581588 a2=94 a3=2 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=224 op=UNLOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.169000 audit: BPF prog-id=225 op=LOAD Dec 12 17:26:56.169000 audit[5103]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe4581718 a2=94 a3=30 items=0 ppid=5024 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:56.175000 audit: BPF prog-id=226 op=LOAD Dec 12 17:26:56.175000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3774da8 a2=98 a3=ffffe3774d98 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.175000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.175000 audit: BPF prog-id=226 op=UNLOAD Dec 12 17:26:56.175000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe3774d78 a3=0 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.175000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.175000 audit: BPF prog-id=227 op=LOAD Dec 12 17:26:56.175000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe3774a38 a2=74 a3=95 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.175000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.175000 audit: BPF prog-id=227 op=UNLOAD Dec 12 17:26:56.175000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.175000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.175000 audit: BPF prog-id=228 op=LOAD Dec 12 17:26:56.175000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=4 a0=5 a1=ffffe3774a98 a2=94 a3=2 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.175000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.175000 audit: BPF prog-id=228 op=UNLOAD Dec 12 17:26:56.175000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.175000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.256000 audit: BPF prog-id=229 op=LOAD Dec 12 17:26:56.256000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe3774a58 a2=40 a3=ffffe3774a88 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.256000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.256000 audit: BPF prog-id=229 op=UNLOAD Dec 12 17:26:56.256000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe3774a88 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.256000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.262000 audit: BPF prog-id=230 op=LOAD Dec 12 17:26:56.262000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe3774a68 a2=94 a3=4 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.262000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.263000 audit: BPF prog-id=230 op=UNLOAD Dec 12 17:26:56.263000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.263000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 
12 17:26:56.263000 audit: BPF prog-id=231 op=LOAD Dec 12 17:26:56.263000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe37748a8 a2=94 a3=5 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.263000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.263000 audit: BPF prog-id=231 op=UNLOAD Dec 12 17:26:56.263000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.263000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.264000 audit: BPF prog-id=232 op=LOAD Dec 12 17:26:56.264000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe3774ad8 a2=94 a3=6 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.264000 audit: BPF prog-id=232 op=UNLOAD Dec 12 17:26:56.264000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.264000 audit: BPF prog-id=233 op=LOAD Dec 12 17:26:56.264000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe37742a8 a2=94 a3=83 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.265000 audit: BPF prog-id=234 op=LOAD Dec 12 17:26:56.265000 audit[5106]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe3774068 a2=94 a3=2 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.265000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.265000 audit: BPF prog-id=234 op=UNLOAD Dec 12 17:26:56.265000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.265000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.265000 audit: BPF prog-id=233 op=UNLOAD Dec 12 17:26:56.265000 audit[5106]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3d2bd620 a3=3d2b0b00 items=0 ppid=5024 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.265000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:56.271000 audit: BPF prog-id=225 op=UNLOAD Dec 12 17:26:56.271000 audit[5024]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000f24040 a2=0 a3=0 items=0 ppid=4810 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.271000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 17:26:56.408000 audit[5130]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=5130 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:56.408000 audit[5130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffcdfba450 a2=0 a3=ffff9a8f4fa8 items=0 ppid=5024 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.408000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:56.412000 audit[5134]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5134 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:56.412000 audit[5134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffda210040 a2=0 a3=ffffb8391fa8 items=0 ppid=5024 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.412000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:56.416000 audit[5131]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 
17:26:56.416000 audit[5131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff5a964d0 a2=0 a3=ffff8ee84fa8 items=0 ppid=5024 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.416000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:56.454000 audit[5137]: NETFILTER_CFG table=filter:127 family=2 entries=94 op=nft_register_chain pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:56.454000 audit[5137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffc6f1d1b0 a2=0 a3=ffff9f958fa8 items=0 ppid=5024 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:56.454000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:57.507991 systemd-networkd[1689]: vxlan.calico: Gained IPv6LL Dec 12 17:26:57.899603 containerd[2108]: time="2025-12-12T17:26:57.899507450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7d8d9766-d9xlz,Uid:e108b102-1b18-4781-b323-af4f0e442eb0,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:57.900651 containerd[2108]: time="2025-12-12T17:26:57.899738798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ff9884-zb77j,Uid:3862f9ed-932b-48f5-bc48-007596c724c2,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:58.041811 systemd-networkd[1689]: caliab0e20e54fb: Link UP Dec 12 17:26:58.042677 systemd-networkd[1689]: caliab0e20e54fb: Gained carrier Dec 12 17:26:58.060942 containerd[2108]: 2025-12-12 17:26:57.967 [INFO][5148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0 calico-apiserver-5b7d8d9766- calico-apiserver e108b102-1b18-4781-b323-af4f0e442eb0 801 0 2025-12-12 17:26:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b7d8d9766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-74f46d5ce1 calico-apiserver-5b7d8d9766-d9xlz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab0e20e54fb [] [] }} ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-d9xlz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-" Dec 12 17:26:58.060942 containerd[2108]: 2025-12-12 17:26:57.968 [INFO][5148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-d9xlz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" Dec 12 17:26:58.060942 containerd[2108]: 2025-12-12 17:26:58.000 [INFO][5172] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" HandleID="k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.000 [INFO][5172] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" HandleID="k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-74f46d5ce1", "pod":"calico-apiserver-5b7d8d9766-d9xlz", "timestamp":"2025-12-12 17:26:58.000593927 +0000 UTC"}, Hostname:"ci-4515.1.0-a-74f46d5ce1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.000 [INFO][5172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.000 [INFO][5172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.000 [INFO][5172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-74f46d5ce1' Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.007 [INFO][5172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.011 [INFO][5172] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.014 [INFO][5172] ipam/ipam.go 511: Trying affinity for 192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.015 [INFO][5172] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061409 containerd[2108]: 2025-12-12 17:26:58.017 [INFO][5172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061717 containerd[2108]: 2025-12-12 17:26:58.017 [INFO][5172] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061717 containerd[2108]: 2025-12-12 17:26:58.018 [INFO][5172] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803 Dec 12 17:26:58.061717 containerd[2108]: 2025-12-12 17:26:58.023 [INFO][5172] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061717 containerd[2108]: 2025-12-12 17:26:58.030 [INFO][5172] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.40.2/26] block=192.168.40.0/26 
handle="k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061717 containerd[2108]: 2025-12-12 17:26:58.030 [INFO][5172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.2/26] handle="k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.061717 containerd[2108]: 2025-12-12 17:26:58.030 [INFO][5172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:26:58.061717 containerd[2108]: 2025-12-12 17:26:58.031 [INFO][5172] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.40.2/26] IPv6=[] ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" HandleID="k8s-pod-network.747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" Dec 12 17:26:58.061813 containerd[2108]: 2025-12-12 17:26:58.033 [INFO][5148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-d9xlz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0", GenerateName:"calico-apiserver-5b7d8d9766-", Namespace:"calico-apiserver", SelfLink:"", UID:"e108b102-1b18-4781-b323-af4f0e442eb0", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7d8d9766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"", Pod:"calico-apiserver-5b7d8d9766-d9xlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab0e20e54fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:58.061852 containerd[2108]: 2025-12-12 17:26:58.034 [INFO][5148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.2/32] ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-d9xlz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" Dec 12 17:26:58.061852 containerd[2108]: 2025-12-12 17:26:58.034 [INFO][5148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab0e20e54fb ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-d9xlz" 
WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" Dec 12 17:26:58.061852 containerd[2108]: 2025-12-12 17:26:58.044 [INFO][5148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-d9xlz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" Dec 12 17:26:58.062299 containerd[2108]: 2025-12-12 17:26:58.044 [INFO][5148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-d9xlz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0", GenerateName:"calico-apiserver-5b7d8d9766-", Namespace:"calico-apiserver", SelfLink:"", UID:"e108b102-1b18-4781-b323-af4f0e442eb0", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7d8d9766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803", Pod:"calico-apiserver-5b7d8d9766-d9xlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab0e20e54fb", MAC:"72:f8:8e:6f:49:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:58.062343 containerd[2108]: 2025-12-12 17:26:58.056 [INFO][5148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-d9xlz" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--d9xlz-eth0" Dec 12 17:26:58.076000 audit[5193]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=5193 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:58.076000 audit[5193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffd016bf90 a2=0 a3=ffffbdbcdfa8 items=0 ppid=5024 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.076000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:58.115977 containerd[2108]: time="2025-12-12T17:26:58.115476354Z" level=info msg="connecting to shim 747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803" address="unix:///run/containerd/s/69150e2487dab289a41f74fb245454643cfcf70d50bc26b8f6998641a1687197" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:58.145049 systemd[1]: Started cri-containerd-747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803.scope - libcontainer container 747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803. Dec 12 17:26:58.154000 audit: BPF prog-id=235 op=LOAD Dec 12 17:26:58.155000 audit: BPF prog-id=236 op=LOAD Dec 12 17:26:58.155000 audit[5214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734376237316262373466326465336463643336363462366439306333 Dec 12 17:26:58.155000 audit: BPF prog-id=236 op=UNLOAD Dec 12 17:26:58.155000 audit[5214]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734376237316262373466326465336463643336363462366439306333 Dec 12 17:26:58.155000 audit: BPF prog-id=237 op=LOAD Dec 12 17:26:58.155000 audit[5214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734376237316262373466326465336463643336363462366439306333 Dec 12 17:26:58.155000 audit: BPF prog-id=238 op=LOAD Dec 12 17:26:58.155000 audit[5214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734376237316262373466326465336463643336363462366439306333 Dec 12 17:26:58.155000 audit: BPF prog-id=238 op=UNLOAD Dec 12 
17:26:58.155000 audit[5214]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734376237316262373466326465336463643336363462366439306333 Dec 12 17:26:58.155000 audit: BPF prog-id=237 op=UNLOAD Dec 12 17:26:58.155000 audit[5214]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734376237316262373466326465336463643336363462366439306333 Dec 12 17:26:58.155000 audit: BPF prog-id=239 op=LOAD Dec 12 17:26:58.155000 audit[5214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5202 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734376237316262373466326465336463643336363462366439306333 Dec 12 17:26:58.182501 containerd[2108]: time="2025-12-12T17:26:58.182006992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7d8d9766-d9xlz,Uid:e108b102-1b18-4781-b323-af4f0e442eb0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"747b71bb74f2de3dcd3664b6d90c37449416bd51522efdc90be647834e52b803\"" Dec 12 17:26:58.185504 containerd[2108]: time="2025-12-12T17:26:58.185296591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:26:58.190843 systemd-networkd[1689]: cali0370e76d085: Link UP Dec 12 17:26:58.192201 systemd-networkd[1689]: cali0370e76d085: Gained carrier Dec 12 17:26:58.211755 containerd[2108]: 2025-12-12 17:26:57.973 [INFO][5157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0 calico-kube-controllers-7cb6ff9884- calico-system 3862f9ed-932b-48f5-bc48-007596c724c2 794 0 2025-12-12 17:26:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cb6ff9884 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-a-74f46d5ce1 calico-kube-controllers-7cb6ff9884-zb77j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0370e76d085 [] [] }} ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" 
Namespace="calico-system" Pod="calico-kube-controllers-7cb6ff9884-zb77j" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-" Dec 12 17:26:58.211755 containerd[2108]: 2025-12-12 17:26:57.973 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ff9884-zb77j" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" Dec 12 17:26:58.211755 containerd[2108]: 2025-12-12 17:26:58.005 [INFO][5178] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" HandleID="k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.006 [INFO][5178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" HandleID="k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-74f46d5ce1", "pod":"calico-kube-controllers-7cb6ff9884-zb77j", "timestamp":"2025-12-12 17:26:58.005871067 +0000 UTC"}, Hostname:"ci-4515.1.0-a-74f46d5ce1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.006 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.030 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.031 [INFO][5178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-74f46d5ce1' Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.109 [INFO][5178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.114 [INFO][5178] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.120 [INFO][5178] ipam/ipam.go 511: Trying affinity for 192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.121 [INFO][5178] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212033 containerd[2108]: 2025-12-12 17:26:58.126 [INFO][5178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212262 containerd[2108]: 2025-12-12 17:26:58.127 [INFO][5178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212262 containerd[2108]: 2025-12-12 17:26:58.129 [INFO][5178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59 Dec 12 17:26:58.212262 containerd[2108]: 2025-12-12 17:26:58.176 [INFO][5178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212262 containerd[2108]: 2025-12-12 17:26:58.184 [INFO][5178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.40.3/26] block=192.168.40.0/26 handle="k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212262 containerd[2108]: 2025-12-12 17:26:58.184 [INFO][5178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.3/26] handle="k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:26:58.212262 containerd[2108]: 2025-12-12 17:26:58.184 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:26:58.212262 containerd[2108]: 2025-12-12 17:26:58.184 [INFO][5178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.40.3/26] IPv6=[] ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" HandleID="k8s-pod-network.0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" Dec 12 17:26:58.212361 containerd[2108]: 2025-12-12 17:26:58.186 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ff9884-zb77j" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0", GenerateName:"calico-kube-controllers-7cb6ff9884-", Namespace:"calico-system", SelfLink:"", UID:"3862f9ed-932b-48f5-bc48-007596c724c2", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb6ff9884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"", Pod:"calico-kube-controllers-7cb6ff9884-zb77j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0370e76d085", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:58.212722 containerd[2108]: 2025-12-12 17:26:58.186 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.3/32] ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ff9884-zb77j" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" Dec 12 17:26:58.212722 containerd[2108]: 2025-12-12 17:26:58.186 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0370e76d085 ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ff9884-zb77j" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" Dec 12 17:26:58.212722 containerd[2108]: 2025-12-12 17:26:58.193 [INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ff9884-zb77j" 
WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" Dec 12 17:26:58.212801 containerd[2108]: 2025-12-12 17:26:58.193 [INFO][5157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ff9884-zb77j" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0", GenerateName:"calico-kube-controllers-7cb6ff9884-", Namespace:"calico-system", SelfLink:"", UID:"3862f9ed-932b-48f5-bc48-007596c724c2", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb6ff9884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59", Pod:"calico-kube-controllers-7cb6ff9884-zb77j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0370e76d085", MAC:"9e:c7:f7:de:bd:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:58.212844 containerd[2108]: 2025-12-12 17:26:58.208 [INFO][5157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ff9884-zb77j" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--kube--controllers--7cb6ff9884--zb77j-eth0" Dec 12 17:26:58.222000 audit[5247]: NETFILTER_CFG table=filter:129 family=2 entries=40 op=nft_register_chain pid=5247 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:58.222000 audit[5247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=fffff39a0f90 a2=0 a3=ffffaec16fa8 items=0 ppid=5024 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.222000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:58.257557 containerd[2108]: time="2025-12-12T17:26:58.256985087Z" level=info msg="connecting to shim 0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59" address="unix:///run/containerd/s/7d1dc0b76fe036142b71e4b3ee5489b5819808c05f4fa60ca04a5555bfb09ff1" 
namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:58.279033 systemd[1]: Started cri-containerd-0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59.scope - libcontainer container 0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59. Dec 12 17:26:58.291000 audit: BPF prog-id=240 op=LOAD Dec 12 17:26:58.291000 audit: BPF prog-id=241 op=LOAD Dec 12 17:26:58.291000 audit[5268]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5257 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061323232366338303537623763653466643066353132643133383764 Dec 12 17:26:58.292000 audit: BPF prog-id=241 op=UNLOAD Dec 12 17:26:58.292000 audit[5268]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5257 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061323232366338303537623763653466643066353132643133383764 Dec 12 17:26:58.292000 audit: BPF prog-id=242 op=LOAD Dec 12 17:26:58.292000 audit[5268]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5257 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061323232366338303537623763653466643066353132643133383764 Dec 12 17:26:58.292000 audit: BPF prog-id=243 op=LOAD Dec 12 17:26:58.292000 audit[5268]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5257 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061323232366338303537623763653466643066353132643133383764 Dec 12 17:26:58.292000 audit: BPF prog-id=243 op=UNLOAD Dec 12 17:26:58.292000 audit[5268]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5257 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.292000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061323232366338303537623763653466643066353132643133383764 Dec 12 17:26:58.292000 audit: BPF prog-id=242 op=UNLOAD Dec 12 17:26:58.292000 audit[5268]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5257 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061323232366338303537623763653466643066353132643133383764 Dec 12 17:26:58.292000 audit: BPF prog-id=244 op=LOAD Dec 12 17:26:58.292000 audit[5268]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5257 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061323232366338303537623763653466643066353132643133383764 Dec 12 17:26:58.322650 containerd[2108]: time="2025-12-12T17:26:58.322602787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ff9884-zb77j,Uid:3862f9ed-932b-48f5-bc48-007596c724c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a2226c8057b7ce4fd0f512d1387deaed6c9aebb3e7110ea1ccb1b414a4d1b59\"" Dec 12 17:26:58.502791 containerd[2108]: time="2025-12-12T17:26:58.502511217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:58.506499 containerd[2108]: time="2025-12-12T17:26:58.506400026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:26:58.506605 containerd[2108]: time="2025-12-12T17:26:58.506455723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:58.506690 kubelet[3664]: E1212 17:26:58.506649 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:58.507138 kubelet[3664]: E1212 17:26:58.506697 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:58.508141 containerd[2108]: time="2025-12-12T17:26:58.507313092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 
17:26:58.508218 kubelet[3664]: E1212 17:26:58.508053 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw8jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7d8d9766-d9xlz_calico-apiserver(e108b102-1b18-4781-b323-af4f0e442eb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:58.509421 kubelet[3664]: E1212 17:26:58.509388 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:26:58.770029 containerd[2108]: time="2025-12-12T17:26:58.769897448Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:58.777432 containerd[2108]: time="2025-12-12T17:26:58.777376822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:26:58.777620 containerd[2108]: 
time="2025-12-12T17:26:58.777472936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:58.777831 kubelet[3664]: E1212 17:26:58.777784 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:26:58.777974 kubelet[3664]: E1212 17:26:58.777957 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:26:58.778248 kubelet[3664]: E1212 17:26:58.778156 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7l28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-7cb6ff9884-zb77j_calico-system(3862f9ed-932b-48f5-bc48-007596c724c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:58.779374 kubelet[3664]: E1212 17:26:58.779332 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:26:59.064432 kubelet[3664]: E1212 17:26:59.064294 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:26:59.067205 kubelet[3664]: E1212 17:26:59.067067 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:26:59.102000 audit[5296]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:59.102000 audit[5296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdc2eb9d0 a2=0 a3=1 items=0 ppid=3812 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:59.109000 audit[5296]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:59.109000 audit[5296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdc2eb9d0 a2=0 a3=1 items=0 ppid=3812 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:59.428057 systemd-networkd[1689]: cali0370e76d085: Gained IPv6LL Dec 12 17:26:59.900091 containerd[2108]: 
time="2025-12-12T17:26:59.900047828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfcrw,Uid:ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:00.004052 systemd-networkd[1689]: caliab0e20e54fb: Gained IPv6LL Dec 12 17:27:00.060746 systemd-networkd[1689]: calid4786e49079: Link UP Dec 12 17:27:00.061377 systemd-networkd[1689]: calid4786e49079: Gained carrier Dec 12 17:27:00.071091 kubelet[3664]: E1212 17:27:00.070472 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:27:00.071091 kubelet[3664]: E1212 17:27:00.070758 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:27:00.082321 containerd[2108]: 2025-12-12 17:26:59.992 [INFO][5297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0 coredns-668d6bf9bc- kube-system ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6 804 0 2025-12-12 17:26:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-74f46d5ce1 coredns-668d6bf9bc-rfcrw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid4786e49079 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfcrw" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-" Dec 12 17:27:00.082321 containerd[2108]: 2025-12-12 17:26:59.992 [INFO][5297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfcrw" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" Dec 12 17:27:00.082321 containerd[2108]: 2025-12-12 17:27:00.016 [INFO][5309] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" HandleID="k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.016 [INFO][5309] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" 
HandleID="k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-74f46d5ce1", "pod":"coredns-668d6bf9bc-rfcrw", "timestamp":"2025-12-12 17:27:00.016716328 +0000 UTC"}, Hostname:"ci-4515.1.0-a-74f46d5ce1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.017 [INFO][5309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.017 [INFO][5309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.017 [INFO][5309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-74f46d5ce1' Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.022 [INFO][5309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.026 [INFO][5309] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.030 [INFO][5309] ipam/ipam.go 511: Trying affinity for 192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.032 [INFO][5309] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.082508 containerd[2108]: 2025-12-12 17:27:00.035 [INFO][5309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.084337 containerd[2108]: 2025-12-12 17:27:00.035 [INFO][5309] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.084337 containerd[2108]: 2025-12-12 17:27:00.037 [INFO][5309] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8 Dec 12 17:27:00.084337 containerd[2108]: 2025-12-12 17:27:00.043 [INFO][5309] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.084337 containerd[2108]: 2025-12-12 17:27:00.053 [INFO][5309] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.40.4/26] block=192.168.40.0/26 handle="k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.084337 containerd[2108]: 2025-12-12 17:27:00.053 [INFO][5309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.4/26] handle="k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:00.084337 containerd[2108]: 2025-12-12 17:27:00.053 [INFO][5309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:00.084337 containerd[2108]: 2025-12-12 17:27:00.053 [INFO][5309] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.40.4/26] IPv6=[] ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" HandleID="k8s-pod-network.f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" Dec 12 17:27:00.084459 containerd[2108]: 2025-12-12 17:27:00.056 [INFO][5297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfcrw" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"", Pod:"coredns-668d6bf9bc-rfcrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4786e49079", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:00.084459 containerd[2108]: 2025-12-12 17:27:00.056 [INFO][5297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.4/32] ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfcrw" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" Dec 12 17:27:00.084459 containerd[2108]: 2025-12-12 17:27:00.056 [INFO][5297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4786e49079 ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfcrw" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" Dec 12 17:27:00.084459 containerd[2108]: 2025-12-12 17:27:00.062 [INFO][5297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rfcrw" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" Dec 12 17:27:00.084459 containerd[2108]: 2025-12-12 17:27:00.063 [INFO][5297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfcrw" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8", Pod:"coredns-668d6bf9bc-rfcrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4786e49079", MAC:"9e:50:44:15:66:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:00.084459 containerd[2108]: 2025-12-12 17:27:00.076 [INFO][5297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfcrw" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--rfcrw-eth0" Dec 12 17:27:00.114000 audit[5324]: NETFILTER_CFG table=filter:132 family=2 entries=50 op=nft_register_chain pid=5324 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:00.114000 audit[5324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=fffff4265a70 a2=0 a3=ffffbb5c1fa8 items=0 ppid=5024 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.114000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:00.332830 containerd[2108]: time="2025-12-12T17:27:00.332629307Z" level=info 
msg="connecting to shim f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8" address="unix:///run/containerd/s/20434030d25c1502d00139fcb74c47c0ae18d793c2bac75078f15706670f6f12" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:00.362057 systemd[1]: Started cri-containerd-f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8.scope - libcontainer container f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8. Dec 12 17:27:00.370000 audit: BPF prog-id=245 op=LOAD Dec 12 17:27:00.371000 audit: BPF prog-id=246 op=LOAD Dec 12 17:27:00.371000 audit[5344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5333 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363731643932376132376232343932323264363562343533353639 Dec 12 17:27:00.371000 audit: BPF prog-id=246 op=UNLOAD Dec 12 17:27:00.371000 audit[5344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5333 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363731643932376132376232343932323264363562343533353639 Dec 12 17:27:00.371000 audit: BPF prog-id=247 op=LOAD Dec 12 17:27:00.371000 audit[5344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5333 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363731643932376132376232343932323264363562343533353639 Dec 12 17:27:00.371000 audit: BPF prog-id=248 op=LOAD Dec 12 17:27:00.371000 audit[5344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5333 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363731643932376132376232343932323264363562343533353639 Dec 12 17:27:00.371000 audit: BPF prog-id=248 op=UNLOAD Dec 12 17:27:00.371000 audit[5344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5333 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363731643932376132376232343932323264363562343533353639 Dec 12 17:27:00.371000 audit: BPF prog-id=247 op=UNLOAD Dec 12 17:27:00.371000 audit[5344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5333 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363731643932376132376232343932323264363562343533353639 Dec 12 17:27:00.371000 audit: BPF prog-id=249 op=LOAD Dec 12 17:27:00.371000 audit[5344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5333 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363731643932376132376232343932323264363562343533353639 Dec 12 17:27:00.405500 containerd[2108]: time="2025-12-12T17:27:00.405378578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfcrw,Uid:ccdb2e15-30cf-4c79-a23a-01e0f8e2bcb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8\"" Dec 12 17:27:00.409665 containerd[2108]: time="2025-12-12T17:27:00.409626674Z" level=info msg="CreateContainer within sandbox \"f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:27:00.440877 containerd[2108]: time="2025-12-12T17:27:00.440769911Z" level=info msg="Container a733fb547e1ff6c4a748f2b7e531aa4a92eb29e404f6d89739c3bd8da8ba7326: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:00.459156 containerd[2108]: time="2025-12-12T17:27:00.459108316Z" level=info msg="CreateContainer within sandbox \"f4671d927a27b249222d65b4535699846391b2fe35b2baf6a3d04dad29a706f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a733fb547e1ff6c4a748f2b7e531aa4a92eb29e404f6d89739c3bd8da8ba7326\"" Dec 12 17:27:00.459984 containerd[2108]: time="2025-12-12T17:27:00.459926871Z" level=info msg="StartContainer for \"a733fb547e1ff6c4a748f2b7e531aa4a92eb29e404f6d89739c3bd8da8ba7326\"" Dec 12 17:27:00.461284 containerd[2108]: time="2025-12-12T17:27:00.461253556Z" level=info msg="connecting to shim a733fb547e1ff6c4a748f2b7e531aa4a92eb29e404f6d89739c3bd8da8ba7326" address="unix:///run/containerd/s/20434030d25c1502d00139fcb74c47c0ae18d793c2bac75078f15706670f6f12" protocol=ttrpc version=3 Dec 12 17:27:00.481034 systemd[1]: Started cri-containerd-a733fb547e1ff6c4a748f2b7e531aa4a92eb29e404f6d89739c3bd8da8ba7326.scope - libcontainer container a733fb547e1ff6c4a748f2b7e531aa4a92eb29e404f6d89739c3bd8da8ba7326. 
Dec 12 17:27:00.490000 audit: BPF prog-id=250 op=LOAD Dec 12 17:27:00.490000 audit: BPF prog-id=251 op=LOAD Dec 12 17:27:00.490000 audit[5370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5333 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333366623534376531666636633461373438663262376535333161 Dec 12 17:27:00.491000 audit: BPF prog-id=251 op=UNLOAD Dec 12 17:27:00.491000 audit[5370]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5333 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333366623534376531666636633461373438663262376535333161 Dec 12 17:27:00.491000 audit: BPF prog-id=252 op=LOAD Dec 12 17:27:00.491000 audit[5370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5333 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333366623534376531666636633461373438663262376535333161 Dec 12 17:27:00.491000 audit: BPF prog-id=253 op=LOAD Dec 12 17:27:00.491000 audit[5370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5333 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333366623534376531666636633461373438663262376535333161 Dec 12 17:27:00.491000 audit: BPF prog-id=253 op=UNLOAD Dec 12 17:27:00.491000 audit[5370]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5333 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333366623534376531666636633461373438663262376535333161 Dec 12 17:27:00.492000 audit: BPF prog-id=252 op=UNLOAD Dec 12 17:27:00.492000 audit[5370]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5333 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333366623534376531666636633461373438663262376535333161 Dec 12 17:27:00.492000 audit: BPF prog-id=254 op=LOAD Dec 12 17:27:00.492000 audit[5370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5333 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333366623534376531666636633461373438663262376535333161 Dec 12 17:27:00.514871 containerd[2108]: time="2025-12-12T17:27:00.514822763Z" level=info msg="StartContainer for \"a733fb547e1ff6c4a748f2b7e531aa4a92eb29e404f6d89739c3bd8da8ba7326\" returns successfully" Dec 12 17:27:00.898578 containerd[2108]: time="2025-12-12T17:27:00.898364450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trqfx,Uid:b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:00.899147 containerd[2108]: time="2025-12-12T17:27:00.899114099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7d8d9766-mzmnd,Uid:ea3fa3f4-48d1-49ff-b968-783d9802a6b3,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:27:00.899250 containerd[2108]: time="2025-12-12T17:27:00.898886221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7g82h,Uid:f432633b-5af0-45f1-805a-f300620eb030,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:01.087778 systemd-networkd[1689]: caliae5e80e4676: Link UP Dec 12 17:27:01.089652 systemd-networkd[1689]: caliae5e80e4676: Gained carrier Dec 12 17:27:01.095843 kubelet[3664]: I1212 17:27:01.094763 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rfcrw" podStartSLOduration=47.094744649 podStartE2EDuration="47.094744649s" podCreationTimestamp="2025-12-12 17:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:01.094137659 +0000 UTC m=+53.267410288" watchObservedRunningTime="2025-12-12 17:27:01.094744649 +0000 UTC m=+53.268017278" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:00.996 [INFO][5402] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0 calico-apiserver-5b7d8d9766- calico-apiserver ea3fa3f4-48d1-49ff-b968-783d9802a6b3 805 0 2025-12-12 17:26:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b7d8d9766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} 
{k8s ci-4515.1.0-a-74f46d5ce1 calico-apiserver-5b7d8d9766-mzmnd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae5e80e4676 [] [] }} ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-mzmnd" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:00.996 [INFO][5402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-mzmnd" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.027 [INFO][5437] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" HandleID="k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.028 [INFO][5437] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" HandleID="k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-74f46d5ce1", "pod":"calico-apiserver-5b7d8d9766-mzmnd", "timestamp":"2025-12-12 17:27:01.027535423 +0000 UTC"}, Hostname:"ci-4515.1.0-a-74f46d5ce1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.029 [INFO][5437] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.030 [INFO][5437] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.030 [INFO][5437] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-74f46d5ce1' Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.039 [INFO][5437] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.045 [INFO][5437] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.049 [INFO][5437] ipam/ipam.go 511: Trying affinity for 192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.051 [INFO][5437] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.054 [INFO][5437] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.054 [INFO][5437] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.056 [INFO][5437] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479 Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.062 [INFO][5437] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.073 [INFO][5437] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.40.5/26] block=192.168.40.0/26 handle="k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.073 [INFO][5437] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.5/26] handle="k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.074 [INFO][5437] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:01.115062 containerd[2108]: 2025-12-12 17:27:01.074 [INFO][5437] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.40.5/26] IPv6=[] ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" HandleID="k8s-pod-network.887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" Dec 12 17:27:01.116371 containerd[2108]: 2025-12-12 17:27:01.078 [INFO][5402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-mzmnd" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0", GenerateName:"calico-apiserver-5b7d8d9766-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea3fa3f4-48d1-49ff-b968-783d9802a6b3", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7d8d9766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"", Pod:"calico-apiserver-5b7d8d9766-mzmnd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae5e80e4676", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:01.116371 containerd[2108]: 2025-12-12 17:27:01.078 [INFO][5402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.5/32] ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-mzmnd" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" Dec 12 17:27:01.116371 containerd[2108]: 2025-12-12 17:27:01.078 [INFO][5402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae5e80e4676 ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-mzmnd" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" Dec 12 17:27:01.116371 containerd[2108]: 2025-12-12 17:27:01.092 [INFO][5402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-mzmnd" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" Dec 12 17:27:01.116371 containerd[2108]: 2025-12-12 17:27:01.093 
[INFO][5402] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-mzmnd" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0", GenerateName:"calico-apiserver-5b7d8d9766-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea3fa3f4-48d1-49ff-b968-783d9802a6b3", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b7d8d9766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479", Pod:"calico-apiserver-5b7d8d9766-mzmnd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae5e80e4676", MAC:"f2:c1:90:dc:d8:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:01.116371 containerd[2108]: 2025-12-12 17:27:01.112 [INFO][5402] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" Namespace="calico-apiserver" Pod="calico-apiserver-5b7d8d9766-mzmnd" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-calico--apiserver--5b7d8d9766--mzmnd-eth0" Dec 12 17:27:01.135000 audit[5466]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.140417 kernel: kauditd_printk_skb: 297 callbacks suppressed Dec 12 17:27:01.140680 kernel: audit: type=1325 audit(1765560421.135:722): table=filter:133 family=2 entries=20 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.135000 audit[5466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe2a43950 a2=0 a3=1 items=0 ppid=3812 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.168206 kernel: audit: type=1300 audit(1765560421.135:722): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe2a43950 a2=0 a3=1 items=0 ppid=3812 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.135000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:01.180664 kernel: audit: type=1327 audit(1765560421.135:722): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:01.169000 audit[5466]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.190458 kernel: audit: type=1325 audit(1765560421.169:723): table=nat:134 family=2 entries=14 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.169000 audit[5466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe2a43950 a2=0 a3=1 items=0 ppid=3812 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.212046 kernel: audit: type=1300 audit(1765560421.169:723): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe2a43950 a2=0 a3=1 items=0 ppid=3812 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:01.224512 kernel: audit: type=1327 audit(1765560421.169:723): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:01.227000 audit[5467]: NETFILTER_CFG table=filter:135 family=2 entries=55 op=nft_register_chain pid=5467 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:01.227000 audit[5467]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28304 a0=3 a1=ffffdc9fd760 a2=0 a3=ffffabd59fa8 items=0 ppid=5024 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.259573 kernel: audit: type=1325 audit(1765560421.227:724): table=filter:135 family=2 entries=55 op=nft_register_chain pid=5467 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:01.259713 kernel: audit: type=1300 audit(1765560421.227:724): arch=c00000b7 syscall=211 success=yes exit=28304 a0=3 a1=ffffdc9fd760 a2=0 a3=ffffabd59fa8 items=0 ppid=5024 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.227000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:01.275484 kernel: audit: type=1327 audit(1765560421.227:724): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:01.270000 audit[5469]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=5469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.277703 systemd-networkd[1689]: califb8793e2324: Link 
UP Dec 12 17:27:01.280684 systemd-networkd[1689]: califb8793e2324: Gained carrier Dec 12 17:27:01.287454 kernel: audit: type=1325 audit(1765560421.270:725): table=filter:136 family=2 entries=17 op=nft_register_rule pid=5469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.270000 audit[5469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc49bda50 a2=0 a3=1 items=0 ppid=3812 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:01.309000 audit[5469]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=5469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.309000 audit[5469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc49bda50 a2=0 a3=1 items=0 ppid=3812 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:00.999 [INFO][5413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0 csi-node-driver- calico-system b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b 696 0 2025-12-12 17:26:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-a-74f46d5ce1 csi-node-driver-trqfx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califb8793e2324 [] [] }} ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Namespace="calico-system" Pod="csi-node-driver-trqfx" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:00.999 [INFO][5413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Namespace="calico-system" Pod="csi-node-driver-trqfx" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.058 [INFO][5444] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" HandleID="k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.058 [INFO][5444] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" HandleID="k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" 
Workload="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-74f46d5ce1", "pod":"csi-node-driver-trqfx", "timestamp":"2025-12-12 17:27:01.058058374 +0000 UTC"}, Hostname:"ci-4515.1.0-a-74f46d5ce1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.058 [INFO][5444] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.074 [INFO][5444] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.074 [INFO][5444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-74f46d5ce1' Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.141 [INFO][5444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.153 [INFO][5444] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.181 [INFO][5444] ipam/ipam.go 511: Trying affinity for 192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.192 [INFO][5444] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.214 [INFO][5444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.214 [INFO][5444] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.216 [INFO][5444] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.225 [INFO][5444] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.242 [INFO][5444] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.40.6/26] block=192.168.40.0/26 handle="k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.242 [INFO][5444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.6/26] handle="k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.242 [INFO][5444] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:01.318425 containerd[2108]: 2025-12-12 17:27:01.242 [INFO][5444] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.40.6/26] IPv6=[] ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" HandleID="k8s-pod-network.ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" Dec 12 17:27:01.318852 containerd[2108]: 2025-12-12 17:27:01.262 [INFO][5413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Namespace="calico-system" Pod="csi-node-driver-trqfx" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"", Pod:"csi-node-driver-trqfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb8793e2324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:01.318852 containerd[2108]: 2025-12-12 17:27:01.264 [INFO][5413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.6/32] ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Namespace="calico-system" Pod="csi-node-driver-trqfx" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" Dec 12 17:27:01.318852 containerd[2108]: 2025-12-12 17:27:01.264 [INFO][5413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb8793e2324 ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Namespace="calico-system" Pod="csi-node-driver-trqfx" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" Dec 12 17:27:01.318852 containerd[2108]: 2025-12-12 17:27:01.277 [INFO][5413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Namespace="calico-system" Pod="csi-node-driver-trqfx" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" Dec 12 17:27:01.318852 containerd[2108]: 2025-12-12 17:27:01.279 [INFO][5413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Namespace="calico-system" Pod="csi-node-driver-trqfx" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f", Pod:"csi-node-driver-trqfx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb8793e2324", MAC:"6a:34:0a:bb:16:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:01.318852 containerd[2108]: 2025-12-12 17:27:01.308 [INFO][5413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" Namespace="calico-system" Pod="csi-node-driver-trqfx" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-csi--node--driver--trqfx-eth0" Dec 12 17:27:01.329092 containerd[2108]: time="2025-12-12T17:27:01.328701366Z" level=info msg="connecting to shim 887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479" address="unix:///run/containerd/s/c56afa85b1555572c81671e17f1119ef34ef6b49a3aec651ef42bd3c43a4caba" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:01.363094 systemd[1]: Started cri-containerd-887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479.scope - libcontainer container 887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479. 
Dec 12 17:27:01.385636 systemd-networkd[1689]: cali4b2b30f217d: Link UP Dec 12 17:27:01.389000 audit[5519]: NETFILTER_CFG table=filter:138 family=2 entries=48 op=nft_register_chain pid=5519 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:01.390498 systemd-networkd[1689]: cali4b2b30f217d: Gained carrier Dec 12 17:27:01.389000 audit[5519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=fffff6519600 a2=0 a3=ffff9d799fa8 items=0 ppid=5024 pid=5519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.389000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:01.407774 containerd[2108]: time="2025-12-12T17:27:01.407733162Z" level=info msg="connecting to shim ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f" address="unix:///run/containerd/s/f8fb858cda1a037ca85f59d27cff5f25aafd1e868299727a03be7157906a654f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:01.417000 audit: BPF prog-id=255 op=LOAD Dec 12 17:27:01.418000 audit: BPF prog-id=256 op=LOAD Dec 12 17:27:01.418000 audit[5497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=5481 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838376330306532623763616462316231383136643566316538643433 Dec 12 17:27:01.418000 audit: BPF prog-id=256 op=UNLOAD Dec 12 17:27:01.418000 audit[5497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5481 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838376330306532623763616462316231383136643566316538643433 Dec 12 17:27:01.418000 audit: BPF prog-id=257 op=LOAD Dec 12 17:27:01.418000 audit[5497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=5481 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838376330306532623763616462316231383136643566316538643433 Dec 12 17:27:01.418000 audit: BPF prog-id=258 op=LOAD Dec 12 17:27:01.418000 audit[5497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=5481 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838376330306532623763616462316231383136643566316538643433 Dec 12 17:27:01.418000 audit: BPF prog-id=258 op=UNLOAD Dec 12 17:27:01.418000 audit[5497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5481 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838376330306532623763616462316231383136643566316538643433 Dec 12 17:27:01.418000 audit: BPF prog-id=257 op=UNLOAD Dec 12 17:27:01.418000 audit[5497]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5481 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838376330306532623763616462316231383136643566316538643433 Dec 12 17:27:01.418000 audit: BPF prog-id=259 op=LOAD Dec 12 17:27:01.418000 audit[5497]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=5481 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838376330306532623763616462316231383136643566316538643433 Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.020 [INFO][5419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0 coredns-668d6bf9bc- kube-system f432633b-5af0-45f1-805a-f300620eb030 806 0 2025-12-12 17:26:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-74f46d5ce1 coredns-668d6bf9bc-7g82h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4b2b30f217d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-7g82h" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.020 [INFO][5419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-7g82h" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.059 [INFO][5451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" HandleID="k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.060 [INFO][5451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" HandleID="k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-74f46d5ce1", "pod":"coredns-668d6bf9bc-7g82h", "timestamp":"2025-12-12 17:27:01.059735412 +0000 UTC"}, Hostname:"ci-4515.1.0-a-74f46d5ce1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.060 [INFO][5451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.242 [INFO][5451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.242 [INFO][5451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-74f46d5ce1' Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.290 [INFO][5451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.317 [INFO][5451] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.329 [INFO][5451] ipam/ipam.go 511: Trying affinity for 192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.332 [INFO][5451] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.334 [INFO][5451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.334 [INFO][5451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.338 [INFO][5451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.361 [INFO][5451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.40.0/26 
handle="k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.374 [INFO][5451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.40.7/26] block=192.168.40.0/26 handle="k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.374 [INFO][5451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.7/26] handle="k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.374 [INFO][5451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:27:01.425046 containerd[2108]: 2025-12-12 17:27:01.374 [INFO][5451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.40.7/26] IPv6=[] ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" HandleID="k8s-pod-network.13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" Dec 12 17:27:01.425483 containerd[2108]: 2025-12-12 17:27:01.376 [INFO][5419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-7g82h" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f432633b-5af0-45f1-805a-f300620eb030", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"", Pod:"coredns-668d6bf9bc-7g82h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b2b30f217d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:01.425483 containerd[2108]: 2025-12-12 17:27:01.376 [INFO][5419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.7/32] 
ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-7g82h" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" Dec 12 17:27:01.425483 containerd[2108]: 2025-12-12 17:27:01.377 [INFO][5419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b2b30f217d ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-7g82h" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" Dec 12 17:27:01.425483 containerd[2108]: 2025-12-12 17:27:01.389 [INFO][5419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-7g82h" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" Dec 12 17:27:01.425483 containerd[2108]: 2025-12-12 17:27:01.391 [INFO][5419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-7g82h" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f432633b-5af0-45f1-805a-f300620eb030", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf", Pod:"coredns-668d6bf9bc-7g82h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b2b30f217d", MAC:"76:97:c8:ce:ed:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:01.425483 containerd[2108]: 2025-12-12 17:27:01.409 [INFO][5419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" Namespace="kube-system" Pod="coredns-668d6bf9bc-7g82h" 
WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-coredns--668d6bf9bc--7g82h-eth0" Dec 12 17:27:01.465090 systemd[1]: Started cri-containerd-ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f.scope - libcontainer container ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f. Dec 12 17:27:01.486619 containerd[2108]: time="2025-12-12T17:27:01.486562818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b7d8d9766-mzmnd,Uid:ea3fa3f4-48d1-49ff-b968-783d9802a6b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"887c00e2b7cadb1b1816d5f1e8d43b3bda491ebff5d44be44ee2fbd1526cf479\"" Dec 12 17:27:01.491309 containerd[2108]: time="2025-12-12T17:27:01.491271252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:01.496285 containerd[2108]: time="2025-12-12T17:27:01.496051655Z" level=info msg="connecting to shim 13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf" address="unix:///run/containerd/s/7015b9fd2e438c9d0d32cfae391ab6a20189c9e3cf9b4a8f7aed07028d4ae943" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:01.508000 audit: BPF prog-id=260 op=LOAD Dec 12 17:27:01.509000 audit: BPF prog-id=261 op=LOAD Dec 12 17:27:01.509000 audit[5547]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001fe180 a2=98 a3=0 items=0 ppid=5530 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164396135373936323962303036363636303861306364363339346439 Dec 12 17:27:01.509000 audit: BPF prog-id=261 op=UNLOAD Dec 12 17:27:01.509000 audit[5547]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164396135373936323962303036363636303861306364363339346439 Dec 12 17:27:01.509000 audit: BPF prog-id=262 op=LOAD Dec 12 17:27:01.509000 audit[5547]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001fe3e8 a2=98 a3=0 items=0 ppid=5530 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164396135373936323962303036363636303861306364363339346439 Dec 12 17:27:01.509000 audit: BPF prog-id=263 op=LOAD Dec 12 17:27:01.509000 audit[5547]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001fe168 a2=98 a3=0 items=0 ppid=5530 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164396135373936323962303036363636303861306364363339346439 Dec 12 17:27:01.509000 audit: BPF prog-id=263 op=UNLOAD Dec 12 17:27:01.509000 audit[5547]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164396135373936323962303036363636303861306364363339346439 Dec 12 17:27:01.509000 audit: BPF prog-id=262 op=UNLOAD Dec 12 17:27:01.509000 audit[5547]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164396135373936323962303036363636303861306364363339346439 Dec 12 17:27:01.509000 audit: BPF prog-id=264 op=LOAD Dec 12 17:27:01.509000 audit[5547]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001fe648 a2=98 a3=0 items=0 ppid=5530 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164396135373936323962303036363636303861306364363339346439 Dec 12 17:27:01.518000 audit[5593]: NETFILTER_CFG table=filter:139 family=2 entries=48 op=nft_register_chain pid=5593 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:01.518000 audit[5593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22704 a0=3 a1=ffffcc664830 a2=0 a3=ffffab492fa8 items=0 ppid=5024 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.518000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:01.530210 systemd[1]: Started cri-containerd-13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf.scope - libcontainer container 13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf. 
Dec 12 17:27:01.541196 containerd[2108]: time="2025-12-12T17:27:01.541152567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trqfx,Uid:b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad9a579629b00666608a0cd6394d9534ec419140218fb2453eb7313deb44061f\"" Dec 12 17:27:01.545000 audit: BPF prog-id=265 op=LOAD Dec 12 17:27:01.546000 audit: BPF prog-id=266 op=LOAD Dec 12 17:27:01.546000 audit[5600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5588 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623338613737303464316233666634323131646638346336626166 Dec 12 17:27:01.546000 audit: BPF prog-id=266 op=UNLOAD Dec 12 17:27:01.546000 audit[5600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5588 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623338613737303464316233666634323131646638346336626166 Dec 12 17:27:01.546000 audit: BPF prog-id=267 op=LOAD Dec 12 17:27:01.546000 audit[5600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5588 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623338613737303464316233666634323131646638346336626166 Dec 12 17:27:01.547000 audit: BPF prog-id=268 op=LOAD Dec 12 17:27:01.547000 audit[5600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5588 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623338613737303464316233666634323131646638346336626166 Dec 12 17:27:01.547000 audit: BPF prog-id=268 op=UNLOAD Dec 12 17:27:01.547000 audit[5600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5588 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.547000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623338613737303464316233666634323131646638346336626166 Dec 12 17:27:01.547000 audit: BPF prog-id=267 op=UNLOAD Dec 12 17:27:01.547000 audit[5600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5588 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623338613737303464316233666634323131646638346336626166 Dec 12 17:27:01.547000 audit: BPF prog-id=269 op=LOAD Dec 12 17:27:01.547000 audit[5600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5588 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133623338613737303464316233666634323131646638346336626166 Dec 12 17:27:01.584600 containerd[2108]: time="2025-12-12T17:27:01.584557097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7g82h,Uid:f432633b-5af0-45f1-805a-f300620eb030,Namespace:kube-system,Attempt:0,} returns sandbox id \"13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf\"" Dec 12 17:27:01.588088 containerd[2108]: time="2025-12-12T17:27:01.588054608Z" level=info msg="CreateContainer within sandbox \"13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:27:01.608198 containerd[2108]: time="2025-12-12T17:27:01.608152748Z" level=info msg="Container e2f83708384c4d9d1c9063f7659c3e9abc77faa65e993313e8b78b24244030f0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:01.628503 containerd[2108]: time="2025-12-12T17:27:01.628454638Z" level=info msg="CreateContainer within sandbox \"13b38a7704d1b3ff4211df84c6bafabe1c5520eaaf8194ffb7800f721f7c5fcf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2f83708384c4d9d1c9063f7659c3e9abc77faa65e993313e8b78b24244030f0\"" Dec 12 17:27:01.629181 containerd[2108]: time="2025-12-12T17:27:01.629148061Z" level=info msg="StartContainer for \"e2f83708384c4d9d1c9063f7659c3e9abc77faa65e993313e8b78b24244030f0\"" Dec 12 17:27:01.630929 containerd[2108]: time="2025-12-12T17:27:01.630896701Z" level=info msg="connecting to shim e2f83708384c4d9d1c9063f7659c3e9abc77faa65e993313e8b78b24244030f0" address="unix:///run/containerd/s/7015b9fd2e438c9d0d32cfae391ab6a20189c9e3cf9b4a8f7aed07028d4ae943" protocol=ttrpc version=3 Dec 12 17:27:01.654083 systemd[1]: Started cri-containerd-e2f83708384c4d9d1c9063f7659c3e9abc77faa65e993313e8b78b24244030f0.scope - libcontainer container e2f83708384c4d9d1c9063f7659c3e9abc77faa65e993313e8b78b24244030f0. 
Dec 12 17:27:01.663000 audit: BPF prog-id=270 op=LOAD Dec 12 17:27:01.664000 audit: BPF prog-id=271 op=LOAD Dec 12 17:27:01.664000 audit[5632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5588 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532663833373038333834633464396431633930363366373635396333 Dec 12 17:27:01.664000 audit: BPF prog-id=271 op=UNLOAD Dec 12 17:27:01.664000 audit[5632]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5588 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532663833373038333834633464396431633930363366373635396333 Dec 12 17:27:01.664000 audit: BPF prog-id=272 op=LOAD Dec 12 17:27:01.664000 audit[5632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5588 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532663833373038333834633464396431633930363366373635396333 Dec 12 17:27:01.664000 audit: BPF prog-id=273 op=LOAD Dec 12 17:27:01.664000 audit[5632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5588 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532663833373038333834633464396431633930363366373635396333 Dec 12 17:27:01.664000 audit: BPF prog-id=273 op=UNLOAD Dec 12 17:27:01.664000 audit[5632]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5588 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532663833373038333834633464396431633930363366373635396333 Dec 12 17:27:01.664000 audit: BPF prog-id=272 op=UNLOAD Dec 12 17:27:01.664000 audit[5632]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5588 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532663833373038333834633464396431633930363366373635396333 Dec 12 17:27:01.664000 audit: BPF prog-id=274 op=LOAD Dec 12 17:27:01.664000 audit[5632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5588 pid=5632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532663833373038333834633464396431633930363366373635396333 Dec 12 17:27:01.687885 containerd[2108]: time="2025-12-12T17:27:01.686850361Z" level=info msg="StartContainer for \"e2f83708384c4d9d1c9063f7659c3e9abc77faa65e993313e8b78b24244030f0\" returns successfully" Dec 12 17:27:01.791593 containerd[2108]: time="2025-12-12T17:27:01.791542495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:01.795734 containerd[2108]: time="2025-12-12T17:27:01.795565170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:01.795734 containerd[2108]: time="2025-12-12T17:27:01.795605266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:01.796174 kubelet[3664]: E1212 17:27:01.796103 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:01.796337 kubelet[3664]: E1212 17:27:01.796157 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:01.796802 containerd[2108]: time="2025-12-12T17:27:01.796715203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:27:01.797081 kubelet[3664]: E1212 17:27:01.797040 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7slkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7d8d9766-mzmnd_calico-apiserver(ea3fa3f4-48d1-49ff-b968-783d9802a6b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:01.799183 kubelet[3664]: E1212 17:27:01.799133 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:27:01.899277 containerd[2108]: time="2025-12-12T17:27:01.899214616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8dz6b,Uid:444315f1-eff9-48ed-a7dc-c9b319819cb8,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:01.924106 systemd-networkd[1689]: calid4786e49079: Gained IPv6LL Dec 12 17:27:02.007073 systemd-networkd[1689]: cali4383a50c201: Link UP Dec 12 17:27:02.008526 systemd-networkd[1689]: cali4383a50c201: Gained carrier Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.939 [INFO][5665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0 goldmane-666569f655- calico-system 444315f1-eff9-48ed-a7dc-c9b319819cb8 803 0 2025-12-12 17:26:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-a-74f46d5ce1 goldmane-666569f655-8dz6b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4383a50c201 [] [] }} ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Namespace="calico-system" Pod="goldmane-666569f655-8dz6b" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.939 [INFO][5665] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Namespace="calico-system" Pod="goldmane-666569f655-8dz6b" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.961 [INFO][5676] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" HandleID="k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.962 [INFO][5676] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" HandleID="k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cefe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-74f46d5ce1", "pod":"goldmane-666569f655-8dz6b", "timestamp":"2025-12-12 17:27:01.96153674 +0000 UTC"}, Hostname:"ci-4515.1.0-a-74f46d5ce1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.962 [INFO][5676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.962 [INFO][5676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.962 [INFO][5676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-74f46d5ce1' Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.972 [INFO][5676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.976 [INFO][5676] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.979 [INFO][5676] ipam/ipam.go 511: Trying affinity for 192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.981 [INFO][5676] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.983 [INFO][5676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.983 [INFO][5676] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.984 [INFO][5676] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532 Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:01.992 [INFO][5676] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:02.000 [INFO][5676] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.40.8/26] block=192.168.40.0/26 handle="k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:02.000 [INFO][5676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.8/26] handle="k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" host="ci-4515.1.0-a-74f46d5ce1" Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:02.001 [INFO][5676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:02.026996 containerd[2108]: 2025-12-12 17:27:02.001 [INFO][5676] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.40.8/26] IPv6=[] ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" HandleID="k8s-pod-network.a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Workload="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" Dec 12 17:27:02.027422 containerd[2108]: 2025-12-12 17:27:02.003 [INFO][5665] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Namespace="calico-system" Pod="goldmane-666569f655-8dz6b" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"444315f1-eff9-48ed-a7dc-c9b319819cb8", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"", Pod:"goldmane-666569f655-8dz6b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4383a50c201", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:02.027422 containerd[2108]: 2025-12-12 17:27:02.003 [INFO][5665] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.8/32] ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Namespace="calico-system" Pod="goldmane-666569f655-8dz6b" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" Dec 12 17:27:02.027422 containerd[2108]: 2025-12-12 17:27:02.003 [INFO][5665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4383a50c201 ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Namespace="calico-system" Pod="goldmane-666569f655-8dz6b" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" Dec 12 17:27:02.027422 containerd[2108]: 2025-12-12 17:27:02.009 [INFO][5665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Namespace="calico-system" Pod="goldmane-666569f655-8dz6b" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" Dec 12 17:27:02.027422 containerd[2108]: 2025-12-12 17:27:02.010 [INFO][5665] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" 
Namespace="calico-system" Pod="goldmane-666569f655-8dz6b" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"444315f1-eff9-48ed-a7dc-c9b319819cb8", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-74f46d5ce1", ContainerID:"a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532", Pod:"goldmane-666569f655-8dz6b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4383a50c201", MAC:"da:f9:99:fb:19:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:02.027422 containerd[2108]: 2025-12-12 17:27:02.024 [INFO][5665] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" Namespace="calico-system" Pod="goldmane-666569f655-8dz6b" WorkloadEndpoint="ci--4515.1.0--a--74f46d5ce1-k8s-goldmane--666569f655--8dz6b-eth0" Dec 12 17:27:02.037000 audit[5691]: NETFILTER_CFG table=filter:140 family=2 entries=70 op=nft_register_chain pid=5691 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:02.037000 audit[5691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=33956 a0=3 a1=ffffe1766a70 a2=0 a3=ffffa9318fa8 items=0 ppid=5024 pid=5691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.037000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:02.076462 containerd[2108]: time="2025-12-12T17:27:02.076409684Z" level=info msg="connecting to shim a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532" address="unix:///run/containerd/s/9a9809cc576af78c3cda3b4f31bfb475c8cead7409cd8ec4942dec7f7fdf11af" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:02.085602 containerd[2108]: time="2025-12-12T17:27:02.085558232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:02.085843 kubelet[3664]: E1212 17:27:02.085809 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:27:02.094278 containerd[2108]: time="2025-12-12T17:27:02.094207089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:02.094913 containerd[2108]: time="2025-12-12T17:27:02.094303163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:02.095235 kubelet[3664]: E1212 17:27:02.095101 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:02.095235 kubelet[3664]: E1212 17:27:02.095177 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:02.095506 kubelet[3664]: E1212 17:27:02.095459 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:02.099881 containerd[2108]: time="2025-12-12T17:27:02.099609641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:02.112527 kubelet[3664]: I1212 17:27:02.112475 3664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7g82h" podStartSLOduration=48.112458647 podStartE2EDuration="48.112458647s" podCreationTimestamp="2025-12-12 17:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:02.110588949 +0000 UTC m=+54.283861578" watchObservedRunningTime="2025-12-12 17:27:02.112458647 +0000 UTC m=+54.285731276" Dec 12 17:27:02.139434 systemd[1]: Started cri-containerd-a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532.scope - libcontainer container a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532. Dec 12 17:27:02.172000 audit: BPF prog-id=275 op=LOAD Dec 12 17:27:02.173000 audit: BPF prog-id=276 op=LOAD Dec 12 17:27:02.173000 audit[5712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5700 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323131663565343236303162383466316161623431313433353963 Dec 12 17:27:02.174000 audit: BPF prog-id=276 op=UNLOAD Dec 12 17:27:02.174000 audit[5712]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5700 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323131663565343236303162383466316161623431313433353963 Dec 12 17:27:02.174000 audit: BPF prog-id=277 op=LOAD Dec 12 17:27:02.174000 audit[5712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5700 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323131663565343236303162383466316161623431313433353963 Dec 12 17:27:02.175000 audit: BPF prog-id=278 op=LOAD Dec 12 17:27:02.175000 audit[5712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5700 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323131663565343236303162383466316161623431313433353963 Dec 12 17:27:02.175000 audit: BPF prog-id=278 op=UNLOAD Dec 12 17:27:02.175000 audit[5712]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5700 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323131663565343236303162383466316161623431313433353963 Dec 12 17:27:02.175000 audit: BPF prog-id=277 op=UNLOAD Dec 12 17:27:02.175000 audit[5712]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5700 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323131663565343236303162383466316161623431313433353963 Dec 12 17:27:02.175000 audit: BPF prog-id=279 op=LOAD Dec 12 17:27:02.175000 audit[5712]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5700 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133323131663565343236303162383466316161623431313433353963 Dec 12 17:27:02.204000 audit[5732]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.204000 audit[5732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffca2d5490 a2=0 a3=1 items=0 ppid=3812 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:02.212000 audit[5732]: NETFILTER_CFG table=nat:142 family=2 entries=44 op=nft_register_rule pid=5732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.212000 audit[5732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffca2d5490 a2=0 a3=1 items=0 ppid=3812 pid=5732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:02.216421 containerd[2108]: time="2025-12-12T17:27:02.216203997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8dz6b,Uid:444315f1-eff9-48ed-a7dc-c9b319819cb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3211f5e42601b84f1aab4114359c9665647d7a62ff16ac98a86abc6aeccb532\"" Dec 12 17:27:02.542287 containerd[2108]: time="2025-12-12T17:27:02.542231736Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:02.545999 containerd[2108]: time="2025-12-12T17:27:02.545852680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:02.545999 containerd[2108]: time="2025-12-12T17:27:02.545912209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:02.546632 kubelet[3664]: E1212 17:27:02.546105 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:02.546632 kubelet[3664]: E1212 17:27:02.546160 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:02.546632 kubelet[3664]: E1212 17:27:02.546407 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:02.547784 containerd[2108]: time="2025-12-12T17:27:02.547625256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:27:02.548077 kubelet[3664]: E1212 17:27:02.547677 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:27:02.564155 systemd-networkd[1689]: caliae5e80e4676: Gained IPv6LL Dec 12 17:27:02.756097 systemd-networkd[1689]: califb8793e2324: Gained IPv6LL Dec 12 17:27:02.814985 containerd[2108]: time="2025-12-12T17:27:02.814837957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:02.819947 containerd[2108]: time="2025-12-12T17:27:02.819897366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:02.820170 containerd[2108]: time="2025-12-12T17:27:02.819992584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:02.820206 kubelet[3664]: E1212 17:27:02.820159 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:02.820237 kubelet[3664]: E1212 17:27:02.820208 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:02.820353 kubelet[3664]: E1212 17:27:02.820320 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8dz6b_calico-system(444315f1-eff9-48ed-a7dc-c9b319819cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:02.821675 kubelet[3664]: E1212 17:27:02.821561 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:27:03.090363 kubelet[3664]: E1212 17:27:03.089900 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:27:03.090363 kubelet[3664]: E1212 17:27:03.089910 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:27:03.090363 kubelet[3664]: E1212 17:27:03.089973 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:27:03.140214 systemd-networkd[1689]: cali4b2b30f217d: Gained IPv6LL Dec 12 17:27:03.150000 audit[5741]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5741 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:03.150000 audit[5741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9e51a10 a2=0 a3=1 items=0 ppid=3812 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.150000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:03.198000 audit[5741]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5741 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:03.198000 audit[5741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff9e51a10 a2=0 a3=1 items=0 ppid=3812 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:03.780054 systemd-networkd[1689]: cali4383a50c201: Gained IPv6LL Dec 12 17:27:04.090019 kubelet[3664]: E1212 17:27:04.089811 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:27:09.900395 containerd[2108]: time="2025-12-12T17:27:09.900352092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:10.172284 containerd[2108]: time="2025-12-12T17:27:10.171932551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:10.176047 containerd[2108]: time="2025-12-12T17:27:10.175944215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:10.176047 containerd[2108]: time="2025-12-12T17:27:10.175999600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:10.176323 kubelet[3664]: E1212 17:27:10.176178 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:10.176763 kubelet[3664]: E1212 17:27:10.176397 3664 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:10.177246 kubelet[3664]: E1212 17:27:10.176842 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f4c1e637181e476989824504d7e7830e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-clnww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74445c9dcb-4pqcz_calico-system(202224df-4814-4cd7-bd50-d9bc16a19fb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:10.179790 containerd[2108]: time="2025-12-12T17:27:10.179758874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:10.446185 containerd[2108]: time="2025-12-12T17:27:10.445792215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:10.449965 containerd[2108]: time="2025-12-12T17:27:10.449912721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:27:10.450064 containerd[2108]: time="2025-12-12T17:27:10.450008651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:10.450280 kubelet[3664]: E1212 17:27:10.450228 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:10.451324 kubelet[3664]: E1212 17:27:10.450283 3664 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:10.451324 kubelet[3664]: E1212 17:27:10.450380 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clnww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74445c9dcb-4pqcz_calico-system(202224df-4814-4cd7-bd50-d9bc16a19fb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:10.451528 kubelet[3664]: E1212 17:27:10.451497 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:27:10.900919 containerd[2108]: time="2025-12-12T17:27:10.900873761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:11.208082 containerd[2108]: time="2025-12-12T17:27:11.207949241Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Dec 12 17:27:11.212968 containerd[2108]: time="2025-12-12T17:27:11.212915470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:11.213069 containerd[2108]: time="2025-12-12T17:27:11.213015424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:11.213289 kubelet[3664]: E1212 17:27:11.213231 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:11.213920 kubelet[3664]: E1212 17:27:11.213610 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:11.213920 kubelet[3664]: E1212 17:27:11.213758 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw8jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7d8d9766-d9xlz_calico-apiserver(e108b102-1b18-4781-b323-af4f0e442eb0): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:11.215056 kubelet[3664]: E1212 17:27:11.215010 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:27:12.899850 containerd[2108]: time="2025-12-12T17:27:12.899799612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:27:13.158996 containerd[2108]: time="2025-12-12T17:27:13.158847591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:13.163906 containerd[2108]: time="2025-12-12T17:27:13.163823532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:27:13.164063 containerd[2108]: time="2025-12-12T17:27:13.163936335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:13.164282 kubelet[3664]: E1212 17:27:13.164236 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:13.164282 kubelet[3664]: E1212 17:27:13.164289 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:13.164840 kubelet[3664]: E1212 17:27:13.164787 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7l28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cb6ff9884-zb77j_calico-system(3862f9ed-932b-48f5-bc48-007596c724c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:13.166893 kubelet[3664]: E1212 17:27:13.166578 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:27:15.899407 containerd[2108]: time="2025-12-12T17:27:15.899151748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:27:16.173193 containerd[2108]: 
time="2025-12-12T17:27:16.172917682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:16.179026 containerd[2108]: time="2025-12-12T17:27:16.178940574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:16.179333 containerd[2108]: time="2025-12-12T17:27:16.179159322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:16.179423 kubelet[3664]: E1212 17:27:16.179369 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:16.179718 kubelet[3664]: E1212 17:27:16.179432 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:16.179718 kubelet[3664]: E1212 17:27:16.179546 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8dz6b_calico-system(444315f1-eff9-48ed-a7dc-c9b319819cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:16.181052 kubelet[3664]: E1212 17:27:16.181021 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:27:16.900108 containerd[2108]: time="2025-12-12T17:27:16.899956451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:17.154275 containerd[2108]: time="2025-12-12T17:27:17.153994961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:17.157790 containerd[2108]: time="2025-12-12T17:27:17.157679281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:17.157904 containerd[2108]: time="2025-12-12T17:27:17.157700618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:17.157995 kubelet[3664]: E1212 17:27:17.157947 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:17.158059 kubelet[3664]: E1212 17:27:17.158000 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:17.158289 kubelet[3664]: E1212 17:27:17.158240 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7slkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7d8d9766-mzmnd_calico-apiserver(ea3fa3f4-48d1-49ff-b968-783d9802a6b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:17.158813 containerd[2108]: time="2025-12-12T17:27:17.158783297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:27:17.160390 kubelet[3664]: E1212 17:27:17.160350 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:27:17.416000 containerd[2108]: time="2025-12-12T17:27:17.415644157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:17.419263 containerd[2108]: time="2025-12-12T17:27:17.419218852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:17.419410 containerd[2108]: time="2025-12-12T17:27:17.419243108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:17.419484 kubelet[3664]: E1212 17:27:17.419440 3664 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:17.419821 kubelet[3664]: E1212 17:27:17.419490 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:17.419821 kubelet[3664]: E1212 17:27:17.419594 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:17.421987 containerd[2108]: time="2025-12-12T17:27:17.421959352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:17.671908 containerd[2108]: time="2025-12-12T17:27:17.671607125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:17.675430 containerd[2108]: time="2025-12-12T17:27:17.675322951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:17.675430 containerd[2108]: time="2025-12-12T17:27:17.675376648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:17.675726 kubelet[3664]: E1212 17:27:17.675672 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:17.675813 kubelet[3664]: E1212 17:27:17.675728 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:17.676000 kubelet[3664]: E1212 17:27:17.675836 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:17.677274 kubelet[3664]: E1212 17:27:17.677152 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:27:21.901212 kubelet[3664]: E1212 17:27:21.901161 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:27:22.901788 kubelet[3664]: E1212 17:27:22.901739 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:27:26.899559 kubelet[3664]: E1212 17:27:26.899511 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:27:29.898846 kubelet[3664]: E1212 17:27:29.898749 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:27:30.899244 kubelet[3664]: E1212 17:27:30.899201 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:27:32.899923 kubelet[3664]: E1212 17:27:32.899837 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:27:33.905252 containerd[2108]: time="2025-12-12T17:27:33.905028708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:34.159481 containerd[2108]: time="2025-12-12T17:27:34.156054645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:34.164621 containerd[2108]: time="2025-12-12T17:27:34.163797740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:34.164838 containerd[2108]: time="2025-12-12T17:27:34.164607222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:34.164968 kubelet[3664]: E1212 17:27:34.164927 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:34.165297 kubelet[3664]: E1212 17:27:34.164976 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:34.165297 kubelet[3664]: E1212 17:27:34.165160 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw8jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7d8d9766-d9xlz_calico-apiserver(e108b102-1b18-4781-b323-af4f0e442eb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:34.165590 containerd[2108]: time="2025-12-12T17:27:34.165567522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:34.166979 kubelet[3664]: E1212 17:27:34.166922 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:27:34.442336 containerd[2108]: time="2025-12-12T17:27:34.442197485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:34.446186 containerd[2108]: time="2025-12-12T17:27:34.446142699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:34.446458 containerd[2108]: time="2025-12-12T17:27:34.446165531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:34.446896 kubelet[3664]: E1212 
17:27:34.446383 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:34.446896 kubelet[3664]: E1212 17:27:34.446558 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:34.446896 kubelet[3664]: E1212 17:27:34.446656 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f4c1e637181e476989824504d7e7830e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-clnww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74445c9dcb-4pqcz_calico-system(202224df-4814-4cd7-bd50-d9bc16a19fb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:34.449185 containerd[2108]: time="2025-12-12T17:27:34.449160140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:34.758190 containerd[2108]: time="2025-12-12T17:27:34.757844149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:34.762680 containerd[2108]: time="2025-12-12T17:27:34.762626932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:27:34.762917 containerd[2108]: time="2025-12-12T17:27:34.762794136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:34.763050 kubelet[3664]: E1212 17:27:34.762997 3664 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:34.763050 kubelet[3664]: E1212 17:27:34.763041 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:34.763167 kubelet[3664]: E1212 17:27:34.763131 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clnww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74445c9dcb-4pqcz_calico-system(202224df-4814-4cd7-bd50-d9bc16a19fb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:34.765483 kubelet[3664]: E1212 17:27:34.765427 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:27:39.900263 containerd[2108]: time="2025-12-12T17:27:39.900176494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:27:40.172658 containerd[2108]: time="2025-12-12T17:27:40.172401825Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:40.176703 containerd[2108]: time="2025-12-12T17:27:40.176592084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:27:40.176703 containerd[2108]: time="2025-12-12T17:27:40.176669198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:40.177214 kubelet[3664]: E1212 17:27:40.176984 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:40.177214 kubelet[3664]: E1212 17:27:40.177027 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:40.177214 kubelet[3664]: E1212 17:27:40.177127 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7l28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cb6ff9884-zb77j_calico-system(3862f9ed-932b-48f5-bc48-007596c724c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:40.178431 kubelet[3664]: E1212 17:27:40.178396 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:27:42.899429 containerd[2108]: time="2025-12-12T17:27:42.899222353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:43.164106 containerd[2108]: time="2025-12-12T17:27:43.163923699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:43.167687 containerd[2108]: time="2025-12-12T17:27:43.167638092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:43.167802 containerd[2108]: time="2025-12-12T17:27:43.167732598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:43.167958 kubelet[3664]: E1212 17:27:43.167893 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:43.167958 kubelet[3664]: E1212 17:27:43.167947 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:43.168759 kubelet[3664]: E1212 17:27:43.168057 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7slkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7d8d9766-mzmnd_calico-apiserver(ea3fa3f4-48d1-49ff-b968-783d9802a6b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:43.169258 kubelet[3664]: E1212 17:27:43.169226 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:27:43.900087 containerd[2108]: time="2025-12-12T17:27:43.899840438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:27:44.177238 containerd[2108]: time="2025-12-12T17:27:44.176976812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:44.180589 containerd[2108]: time="2025-12-12T17:27:44.180509473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:44.180589 containerd[2108]: time="2025-12-12T17:27:44.180548602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:44.181221 kubelet[3664]: 
E1212 17:27:44.180757 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:44.181221 kubelet[3664]: E1212 17:27:44.180796 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:44.181221 kubelet[3664]: E1212 17:27:44.180924 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8dz6b_calico-system(444315f1-eff9-48ed-a7dc-c9b319819cb8): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:44.182839 kubelet[3664]: E1212 17:27:44.182798 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:27:47.901883 containerd[2108]: time="2025-12-12T17:27:47.901836366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:27:47.903356 kubelet[3664]: E1212 17:27:47.903323 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:27:48.173332 containerd[2108]: time="2025-12-12T17:27:48.173068291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:48.176787 containerd[2108]: time="2025-12-12T17:27:48.176660089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:48.176787 containerd[2108]: time="2025-12-12T17:27:48.176749363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:48.178525 kubelet[3664]: E1212 17:27:48.177972 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:48.178525 kubelet[3664]: E1212 17:27:48.178021 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:48.178525 kubelet[3664]: E1212 17:27:48.178114 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:48.180202 containerd[2108]: time="2025-12-12T17:27:48.180171245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:48.442073 containerd[2108]: time="2025-12-12T17:27:48.441935085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:48.445550 containerd[2108]: time="2025-12-12T17:27:48.445503987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:48.445654 containerd[2108]: time="2025-12-12T17:27:48.445592540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:48.445810 kubelet[3664]: E1212 17:27:48.445768 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:48.445898 kubelet[3664]: E1212 17:27:48.445826 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:48.446270 kubelet[3664]: E1212 17:27:48.445946 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:48.447138 kubelet[3664]: E1212 17:27:48.447086 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:27:49.902055 kubelet[3664]: E1212 17:27:49.902007 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:27:54.899473 kubelet[3664]: E1212 17:27:54.899255 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:27:55.899595 kubelet[3664]: E1212 17:27:55.899547 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:27:56.898244 kubelet[3664]: E1212 17:27:56.898197 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:27:57.869990 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:36218.service - OpenSSH per-connection server daemon (10.200.16.10:36218). Dec 12 17:27:57.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.11:22-10.200.16.10:36218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:27:57.873583 kernel: kauditd_printk_skb: 136 callbacks suppressed Dec 12 17:27:57.873670 kernel: audit: type=1130 audit(1765560477.869:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.11:22-10.200.16.10:36218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:58.310000 audit[5835]: USER_ACCT pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.313087 sshd[5835]: Accepted publickey for core from 10.200.16.10 port 36218 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:27:58.328000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.332095 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:27:58.351305 kernel: audit: type=1101 audit(1765560478.310:775): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.351438 kernel: audit: type=1103 audit(1765560478.328:776): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.363770 kernel: audit: type=1006 audit(1765560478.328:777): pid=5835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 17:27:58.328000 audit[5835]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1fa6ac0 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.387633 kernel: audit: type=1300 audit(1765560478.328:777): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1fa6ac0 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:58.369837 systemd-logind[2072]: New session 10 of user core. Dec 12 17:27:58.328000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:58.395546 kernel: audit: type=1327 audit(1765560478.328:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:27:58.389454 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 12 17:27:58.394000 audit[5835]: USER_START pid=5835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.417217 kernel: audit: type=1105 audit(1765560478.394:778): pid=5835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.415000 audit[5838]: CRED_ACQ pid=5838 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.433452 kernel: audit: type=1103 audit(1765560478.415:779): pid=5838 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.631437 sshd[5838]: Connection closed by 10.200.16.10 port 36218 Dec 12 17:27:58.632082 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Dec 12 17:27:58.632000 audit[5835]: USER_END pid=5835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.637574 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:36218.service: Deactivated successfully. Dec 12 17:27:58.640730 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:27:58.642839 systemd-logind[2072]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:27:58.645361 systemd-logind[2072]: Removed session 10. Dec 12 17:27:58.633000 audit[5835]: CRED_DISP pid=5835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.669872 kernel: audit: type=1106 audit(1765560478.632:780): pid=5835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.670016 kernel: audit: type=1104 audit(1765560478.633:781): pid=5835 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:27:58.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.11:22-10.200.16.10:36218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:27:59.899244 kubelet[3664]: E1212 17:27:59.899110 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:28:00.900996 kubelet[3664]: E1212 17:28:00.900942 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:28:00.902459 kubelet[3664]: E1212 17:28:00.902026 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:28:03.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.11:22-10.200.16.10:35464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:03.718122 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:35464.service - OpenSSH per-connection server daemon (10.200.16.10:35464). Dec 12 17:28:03.721764 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:03.721940 kernel: audit: type=1130 audit(1765560483.716:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.11:22-10.200.16.10:35464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:04.132000 audit[5851]: USER_ACCT pid=5851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.151504 sshd[5851]: Accepted publickey for core from 10.200.16.10 port 35464 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:04.167433 kernel: audit: type=1101 audit(1765560484.132:784): pid=5851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.167484 kernel: audit: type=1103 audit(1765560484.149:785): pid=5851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.149000 audit[5851]: CRED_ACQ pid=5851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.152091 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:04.162784 systemd-logind[2072]: New session 11 of user core. Dec 12 17:28:04.177275 kernel: audit: type=1006 audit(1765560484.149:786): pid=5851 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 12 17:28:04.149000 audit[5851]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc018780 a2=3 a3=0 items=0 ppid=1 pid=5851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:04.194520 kernel: audit: type=1300 audit(1765560484.149:786): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc018780 a2=3 a3=0 items=0 ppid=1 pid=5851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:04.194686 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:28:04.149000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:04.202988 kernel: audit: type=1327 audit(1765560484.149:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:04.202000 audit[5851]: USER_START pid=5851 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.224893 kernel: audit: type=1105 audit(1765560484.202:787): pid=5851 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.205000 audit[5854]: CRED_ACQ pid=5854 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.248878 kernel: audit: type=1103 audit(1765560484.205:788): pid=5854 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.432885 sshd[5854]: Connection closed by 10.200.16.10 port 35464 Dec 12 17:28:04.434277 sshd-session[5851]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:04.434000 audit[5851]: USER_END pid=5851 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.439365 systemd-logind[2072]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:28:04.441252 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:35464.service: Deactivated successfully. Dec 12 17:28:04.446447 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:28:04.451177 systemd-logind[2072]: Removed session 11. 
Dec 12 17:28:04.434000 audit[5851]: CRED_DISP pid=5851 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.470998 kernel: audit: type=1106 audit(1765560484.434:789): pid=5851 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.471121 kernel: audit: type=1104 audit(1765560484.434:790): pid=5851 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:04.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.11:22-10.200.16.10:35464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:07.901598 kubelet[3664]: E1212 17:28:07.900026 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:28:08.899043 kubelet[3664]: E1212 17:28:08.898997 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:28:09.535612 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:35476.service - OpenSSH per-connection server daemon (10.200.16.10:35476). Dec 12 17:28:09.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.11:22-10.200.16.10:35476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:09.539624 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:09.539675 kernel: audit: type=1130 audit(1765560489.535:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.11:22-10.200.16.10:35476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:09.899611 kubelet[3664]: E1212 17:28:09.899069 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:28:09.974000 audit[5869]: USER_ACCT pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:09.975158 sshd[5869]: Accepted publickey for core from 10.200.16.10 port 35476 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:09.992883 kernel: audit: type=1101 audit(1765560489.974:793): pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:09.992000 audit[5869]: CRED_ACQ pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:09.993694 sshd-session[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:10.013765 systemd-logind[2072]: New session 12 of user core. Dec 12 17:28:10.019025 kernel: audit: type=1103 audit(1765560489.992:794): pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.019106 kernel: audit: type=1006 audit(1765560489.992:795): pid=5869 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 12 17:28:09.992000 audit[5869]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc03fce20 a2=3 a3=0 items=0 ppid=1 pid=5869 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.036870 kernel: audit: type=1300 audit(1765560489.992:795): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc03fce20 a2=3 a3=0 items=0 ppid=1 pid=5869 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:09.992000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:10.044464 kernel: audit: type=1327 audit(1765560489.992:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:10.046108 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 17:28:10.050000 audit[5869]: USER_START pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.052000 audit[5872]: CRED_ACQ pid=5872 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.091094 kernel: audit: type=1105 audit(1765560490.050:796): pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.091216 kernel: audit: type=1103 audit(1765560490.052:797): pid=5872 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.300282 sshd[5872]: Connection closed by 10.200.16.10 port 35476 Dec 12 17:28:10.300900 sshd-session[5869]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:10.301000 audit[5869]: USER_END pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.305826 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:35476.service: Deactivated successfully. Dec 12 17:28:10.309388 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:28:10.311359 systemd-logind[2072]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:28:10.312688 systemd-logind[2072]: Removed session 12. Dec 12 17:28:10.301000 audit[5869]: CRED_DISP pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.339827 kernel: audit: type=1106 audit(1765560490.301:798): pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.339962 kernel: audit: type=1104 audit(1765560490.301:799): pid=5869 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.11:22-10.200.16.10:35476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:10.390135 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:37456.service - OpenSSH per-connection server daemon (10.200.16.10:37456). Dec 12 17:28:10.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.11:22-10.200.16.10:37456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:10.811000 audit[5885]: USER_ACCT pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.813113 sshd[5885]: Accepted publickey for core from 10.200.16.10 port 37456 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:10.813000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.813000 audit[5885]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff53ca4d0 a2=3 a3=0 items=0 ppid=1 pid=5885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:10.813000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:10.814243 sshd-session[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:10.819314 systemd-logind[2072]: New session 13 of user core. Dec 12 17:28:10.823995 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:28:10.826000 audit[5885]: USER_START pid=5885 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:10.827000 audit[5889]: CRED_ACQ pid=5889 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.125923 sshd[5889]: Connection closed by 10.200.16.10 port 37456 Dec 12 17:28:11.126801 sshd-session[5885]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:11.127000 audit[5885]: USER_END pid=5885 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.127000 audit[5885]: CRED_DISP pid=5885 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.131243 systemd-logind[2072]: Session 13 logged out. Waiting for processes to exit. 
Dec 12 17:28:11.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.11:22-10.200.16.10:37456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:11.131276 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:37456.service: Deactivated successfully. Dec 12 17:28:11.133745 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:28:11.135902 systemd-logind[2072]: Removed session 13. Dec 12 17:28:11.217992 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:37464.service - OpenSSH per-connection server daemon (10.200.16.10:37464). Dec 12 17:28:11.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.11:22-10.200.16.10:37464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:11.638000 audit[5899]: USER_ACCT pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.639195 sshd[5899]: Accepted publickey for core from 10.200.16.10 port 37464 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:11.639000 audit[5899]: CRED_ACQ pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.639000 audit[5899]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3a07580 a2=3 a3=0 items=0 ppid=1 pid=5899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:11.639000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:11.640351 sshd-session[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:11.644529 systemd-logind[2072]: New session 14 of user core. Dec 12 17:28:11.652016 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 17:28:11.653000 audit[5899]: USER_START pid=5899 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.655000 audit[5902]: CRED_ACQ pid=5902 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.960307 sshd[5902]: Connection closed by 10.200.16.10 port 37464 Dec 12 17:28:11.959510 sshd-session[5899]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:11.960000 audit[5899]: USER_END pid=5899 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.960000 audit[5899]: CRED_DISP pid=5899 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:11.963946 systemd-logind[2072]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:28:11.964828 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:37464.service: Deactivated successfully. Dec 12 17:28:11.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.11:22-10.200.16.10:37464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:11.970529 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:28:11.976773 systemd-logind[2072]: Removed session 14. 
Dec 12 17:28:12.900192 kubelet[3664]: E1212 17:28:12.900138 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:28:13.900122 kubelet[3664]: E1212 17:28:13.900058 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:28:15.899620 kubelet[3664]: E1212 17:28:15.899565 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:28:17.043317 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:37478.service - OpenSSH per-connection server daemon (10.200.16.10:37478). Dec 12 17:28:17.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.11:22-10.200.16.10:37478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:17.047455 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 17:28:17.047504 kernel: audit: type=1130 audit(1765560497.042:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.11:22-10.200.16.10:37478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:17.463000 audit[5932]: USER_ACCT pid=5932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.481761 sshd[5932]: Accepted publickey for core from 10.200.16.10 port 37478 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:17.482676 sshd-session[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:17.481000 audit[5932]: CRED_ACQ pid=5932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.499236 kernel: audit: type=1101 audit(1765560497.463:820): pid=5932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.499312 kernel: audit: type=1103 audit(1765560497.481:821): pid=5932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.510024 kernel: audit: type=1006 audit(1765560497.481:822): pid=5932 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 17:28:17.481000 audit[5932]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5aed2d0 a2=3 a3=0 items=0 ppid=1 pid=5932 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.527925 kernel: audit: type=1300 audit(1765560497.481:822): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5aed2d0 a2=3 a3=0 items=0 ppid=1 pid=5932 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:17.481000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:17.531302 systemd-logind[2072]: New session 15 of user core. Dec 12 17:28:17.535142 kernel: audit: type=1327 audit(1765560497.481:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:17.544052 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 17:28:17.546000 audit[5932]: USER_START pid=5932 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.566000 audit[5935]: CRED_ACQ pid=5935 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.582743 kernel: audit: type=1105 audit(1765560497.546:823): pid=5932 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.582839 kernel: audit: type=1103 audit(1765560497.566:824): pid=5935 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.762940 sshd[5935]: Connection closed by 10.200.16.10 port 37478 Dec 12 17:28:17.766071 sshd-session[5932]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:17.766000 audit[5932]: USER_END pid=5932 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.787036 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:37478.service: Deactivated successfully. Dec 12 17:28:17.788649 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:28:17.768000 audit[5932]: CRED_DISP pid=5932 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.793658 systemd-logind[2072]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:28:17.795904 systemd-logind[2072]: Removed session 15. Dec 12 17:28:17.804344 kernel: audit: type=1106 audit(1765560497.766:825): pid=5932 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.804420 kernel: audit: type=1104 audit(1765560497.768:826): pid=5932 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:17.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.11:22-10.200.16.10:37478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:20.898595 kubelet[3664]: E1212 17:28:20.898546 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:28:20.899427 containerd[2108]: time="2025-12-12T17:28:20.899167324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:21.197970 containerd[2108]: time="2025-12-12T17:28:21.197690130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:21.201735 containerd[2108]: time="2025-12-12T17:28:21.201630904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:21.201735 containerd[2108]: time="2025-12-12T17:28:21.201682921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:21.201952 kubelet[3664]: E1212 17:28:21.201905 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:21.202008 kubelet[3664]: E1212 17:28:21.201967 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:21.202421 kubelet[3664]: E1212 17:28:21.202091 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7l28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cb6ff9884-zb77j_calico-system(3862f9ed-932b-48f5-bc48-007596c724c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:21.204438 kubelet[3664]: E1212 17:28:21.203669 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:28:21.901186 kubelet[3664]: E1212 17:28:21.901144 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:28:22.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.11:22-10.200.16.10:57082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:22.858725 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:57082.service - OpenSSH per-connection server daemon (10.200.16.10:57082). Dec 12 17:28:22.862110 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:22.862308 kernel: audit: type=1130 audit(1765560502.857:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.11:22-10.200.16.10:57082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:23.297000 audit[5972]: USER_ACCT pid=5972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.308388 sshd[5972]: Accepted publickey for core from 10.200.16.10 port 57082 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:23.315302 sshd-session[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:23.313000 audit[5972]: CRED_ACQ pid=5972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.330935 kernel: audit: type=1101 audit(1765560503.297:829): pid=5972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.331014 kernel: audit: type=1103 audit(1765560503.313:830): pid=5972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.341802 kernel: audit: type=1006 audit(1765560503.313:831): pid=5972 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 17:28:23.341510 systemd-logind[2072]: New session 16 of user core. 
Dec 12 17:28:23.313000 audit[5972]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa3cca80 a2=3 a3=0 items=0 ppid=1 pid=5972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:23.359561 kernel: audit: type=1300 audit(1765560503.313:831): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa3cca80 a2=3 a3=0 items=0 ppid=1 pid=5972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:23.313000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:23.361092 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:28:23.367031 kernel: audit: type=1327 audit(1765560503.313:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:23.368000 audit[5972]: USER_START pid=5972 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.368000 audit[5975]: CRED_ACQ pid=5975 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.401600 kernel: audit: type=1105 audit(1765560503.368:832): pid=5972 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.401697 kernel: audit: type=1103 audit(1765560503.368:833): pid=5975 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.609433 sshd[5975]: Connection closed by 10.200.16.10 port 57082 Dec 12 17:28:23.609991 sshd-session[5972]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:23.611000 audit[5972]: USER_END pid=5972 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.614335 systemd-logind[2072]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:28:23.616251 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:57082.service: Deactivated successfully. Dec 12 17:28:23.619457 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:28:23.622281 systemd-logind[2072]: Removed session 16. 
Dec 12 17:28:23.611000 audit[5972]: CRED_DISP pid=5972 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.652231 kernel: audit: type=1106 audit(1765560503.611:834): pid=5972 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.652307 kernel: audit: type=1104 audit(1765560503.611:835): pid=5972 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:23.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.11:22-10.200.16.10:57082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:23.899138 containerd[2108]: time="2025-12-12T17:28:23.899000443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:24.144840 containerd[2108]: time="2025-12-12T17:28:24.144785967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:24.151195 containerd[2108]: time="2025-12-12T17:28:24.150971629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:24.151195 containerd[2108]: time="2025-12-12T17:28:24.151052711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:24.151275 kubelet[3664]: E1212 17:28:24.151237 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:24.151543 kubelet[3664]: E1212 17:28:24.151284 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:24.151543 kubelet[3664]: E1212 17:28:24.151390 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw8jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7d8d9766-d9xlz_calico-apiserver(e108b102-1b18-4781-b323-af4f0e442eb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:24.153549 kubelet[3664]: E1212 17:28:24.152907 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:28:26.900320 kubelet[3664]: E1212 17:28:26.900260 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:28:28.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.11:22-10.200.16.10:57090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:28.700732 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:57090.service - OpenSSH per-connection server daemon (10.200.16.10:57090). Dec 12 17:28:28.704426 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:28.704511 kernel: audit: type=1130 audit(1765560508.699:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.11:22-10.200.16.10:57090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:28.899762 containerd[2108]: time="2025-12-12T17:28:28.899729486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:28:29.132000 audit[6001]: USER_ACCT pid=6001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.133371 sshd[6001]: Accepted publickey for core from 10.200.16.10 port 57090 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:29.149881 kernel: audit: type=1101 audit(1765560509.132:838): pid=6001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.150458 sshd-session[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:29.149000 audit[6001]: CRED_ACQ pid=6001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.176829 kernel: audit: type=1103 audit(1765560509.149:839): pid=6001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.176922 kernel: audit: type=1006 audit(1765560509.149:840): pid=6001 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 17:28:29.177769 containerd[2108]: time="2025-12-12T17:28:29.177717629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:29.149000 audit[6001]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea167730 a2=3 a3=0 items=0 ppid=1 pid=6001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:29.195413 kernel: audit: type=1300 audit(1765560509.149:840): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea167730 a2=3 a3=0 items=0 ppid=1 pid=6001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:29.149000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:29.198243 systemd-logind[2072]: New session 17 of user core. Dec 12 17:28:29.202161 kernel: audit: type=1327 audit(1765560509.149:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:29.202524 containerd[2108]: time="2025-12-12T17:28:29.202377627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:28:29.202733 containerd[2108]: time="2025-12-12T17:28:29.202490093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:29.203042 kubelet[3664]: E1212 17:28:29.202989 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:29.204261 kubelet[3664]: E1212 17:28:29.203300 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:29.204356 kubelet[3664]: E1212 17:28:29.203430 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f4c1e637181e476989824504d7e7830e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-clnww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74445c9dcb-4pqcz_calico-system(202224df-4814-4cd7-bd50-d9bc16a19fb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 
17:28:29.207478 containerd[2108]: time="2025-12-12T17:28:29.207084329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:28:29.207144 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:28:29.212000 audit[6001]: USER_START pid=6001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.231000 audit[6004]: CRED_ACQ pid=6004 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.247672 kernel: audit: type=1105 audit(1765560509.212:841): pid=6001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.247806 kernel: audit: type=1103 audit(1765560509.231:842): pid=6004 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.443126 sshd[6004]: Connection closed by 10.200.16.10 port 57090 Dec 12 17:28:29.446370 sshd-session[6001]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:29.448000 audit[6001]: USER_END pid=6001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.451904 systemd-logind[2072]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:28:29.452826 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:57090.service: Deactivated successfully. Dec 12 17:28:29.457689 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:28:29.463539 systemd-logind[2072]: Removed session 17. 
Dec 12 17:28:29.448000 audit[6001]: CRED_DISP pid=6001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.494289 kernel: audit: type=1106 audit(1765560509.448:843): pid=6001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.494473 kernel: audit: type=1104 audit(1765560509.448:844): pid=6001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:29.494720 containerd[2108]: time="2025-12-12T17:28:29.494660710Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:29.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.11:22-10.200.16.10:57090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:29.498872 containerd[2108]: time="2025-12-12T17:28:29.498783344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:28:29.498943 containerd[2108]: time="2025-12-12T17:28:29.498843753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:29.499123 kubelet[3664]: E1212 17:28:29.499080 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:29.499243 kubelet[3664]: E1212 17:28:29.499134 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:29.499281 kubelet[3664]: E1212 17:28:29.499227 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clnww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74445c9dcb-4pqcz_calico-system(202224df-4814-4cd7-bd50-d9bc16a19fb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:29.500523 kubelet[3664]: E1212 17:28:29.500492 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:28:32.898342 containerd[2108]: time="2025-12-12T17:28:32.898301938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:33.192287 containerd[2108]: time="2025-12-12T17:28:33.192141289Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:33.195560 containerd[2108]: time="2025-12-12T17:28:33.195521111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 
17:28:33.195749 containerd[2108]: time="2025-12-12T17:28:33.195581504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:33.195807 kubelet[3664]: E1212 17:28:33.195747 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:33.195807 kubelet[3664]: E1212 17:28:33.195799 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:33.196116 kubelet[3664]: E1212 17:28:33.195926 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8dz6b_calico-system(444315f1-eff9-48ed-a7dc-c9b319819cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:33.197348 kubelet[3664]: E1212 17:28:33.197299 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:28:33.898954 containerd[2108]: time="2025-12-12T17:28:33.898759153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:34.238950 containerd[2108]: time="2025-12-12T17:28:34.238007932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:34.242972 containerd[2108]: time="2025-12-12T17:28:34.242926514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:34.243143 containerd[2108]: time="2025-12-12T17:28:34.243018716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:34.243328 kubelet[3664]: E1212 17:28:34.243260 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:34.243328 kubelet[3664]: E1212 17:28:34.243313 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:34.243813 kubelet[3664]: E1212 17:28:34.243754 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7slkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b7d8d9766-mzmnd_calico-apiserver(ea3fa3f4-48d1-49ff-b968-783d9802a6b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:34.244958 kubelet[3664]: E1212 17:28:34.244922 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:28:34.536150 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:33342.service - OpenSSH per-connection server daemon (10.200.16.10:33342). Dec 12 17:28:34.540388 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:34.540425 kernel: audit: type=1130 audit(1765560514.535:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.11:22-10.200.16.10:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:34.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.11:22-10.200.16.10:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:34.981000 audit[6023]: USER_ACCT pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:34.999566 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:35.000104 sshd[6023]: Accepted publickey for core from 10.200.16.10 port 33342 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:34.998000 audit[6023]: CRED_ACQ pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.019835 kernel: audit: type=1101 audit(1765560514.981:847): pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.019923 kernel: audit: type=1103 audit(1765560514.998:848): pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.019847 systemd-logind[2072]: New session 18 of user core. Dec 12 17:28:35.030566 kernel: audit: type=1006 audit(1765560514.998:849): pid=6023 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 12 17:28:34.998000 audit[6023]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef431380 a2=3 a3=0 items=0 ppid=1 pid=6023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:35.049660 kernel: audit: type=1300 audit(1765560514.998:849): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef431380 a2=3 a3=0 items=0 ppid=1 pid=6023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:34.998000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:35.050894 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 17:28:35.057727 kernel: audit: type=1327 audit(1765560514.998:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:35.060000 audit[6023]: USER_START pid=6023 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.082000 audit[6026]: CRED_ACQ pid=6026 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.099747 kernel: audit: type=1105 audit(1765560515.060:850): pid=6023 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.100263 kernel: audit: type=1103 audit(1765560515.082:851): pid=6026 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.305643 sshd[6026]: Connection closed by 10.200.16.10 port 33342 Dec 12 17:28:35.304000 audit[6023]: USER_END pid=6023 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.304528 sshd-session[6023]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:35.313023 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:33342.service: Deactivated successfully. Dec 12 17:28:35.316514 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:28:35.309000 audit[6023]: CRED_DISP pid=6023 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.343045 kernel: audit: type=1106 audit(1765560515.304:852): pid=6023 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.343141 kernel: audit: type=1104 audit(1765560515.309:853): pid=6023 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.11:22-10.200.16.10:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:35.344288 systemd-logind[2072]: Session 18 logged out. Waiting for processes to exit. 
Dec 12 17:28:35.345148 systemd-logind[2072]: Removed session 18. Dec 12 17:28:35.387271 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:33356.service - OpenSSH per-connection server daemon (10.200.16.10:33356). Dec 12 17:28:35.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.11:22-10.200.16.10:33356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:35.776000 audit[6038]: USER_ACCT pid=6038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.779895 sshd[6038]: Accepted publickey for core from 10.200.16.10 port 33356 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:35.780146 sshd-session[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:35.778000 audit[6038]: CRED_ACQ pid=6038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.778000 audit[6038]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5c2e320 a2=3 a3=0 items=0 ppid=1 pid=6038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:35.778000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:35.786026 systemd-logind[2072]: New session 19 of user core. Dec 12 17:28:35.791416 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 17:28:35.794000 audit[6038]: USER_START pid=6038 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.795000 audit[6043]: CRED_ACQ pid=6043 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:35.899593 kubelet[3664]: E1212 17:28:35.899310 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:28:36.172681 sshd[6043]: Connection closed by 10.200.16.10 port 33356 Dec 12 17:28:36.174552 sshd-session[6038]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:36.174000 audit[6038]: USER_END pid=6038 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:36.174000 audit[6038]: CRED_DISP pid=6038 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:36.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.11:22-10.200.16.10:33356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:36.179328 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:33356.service: Deactivated successfully. Dec 12 17:28:36.182716 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:28:36.185910 systemd-logind[2072]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:28:36.186947 systemd-logind[2072]: Removed session 19. Dec 12 17:28:36.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.11:22-10.200.16.10:33358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:36.255111 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:33358.service - OpenSSH per-connection server daemon (10.200.16.10:33358). 
Dec 12 17:28:36.647000 audit[6053]: USER_ACCT pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:36.648899 sshd[6053]: Accepted publickey for core from 10.200.16.10 port 33358 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:36.648000 audit[6053]: CRED_ACQ pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:36.648000 audit[6053]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc157f930 a2=3 a3=0 items=0 ppid=1 pid=6053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:36.648000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:36.650143 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:36.654150 systemd-logind[2072]: New session 20 of user core. Dec 12 17:28:36.659018 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:28:36.662000 audit[6053]: USER_START pid=6053 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:36.663000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:36.899286 kubelet[3664]: E1212 17:28:36.898963 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:28:37.239000 audit[6074]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:37.239000 audit[6074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe1566360 a2=0 a3=1 items=0 ppid=3812 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:37.239000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:37.246000 audit[6074]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Dec 12 17:28:37.246000 audit[6074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe1566360 a2=0 a3=1 items=0 ppid=3812 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:37.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:37.268000 audit[6076]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=6076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:37.268000 audit[6076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe184ac90 a2=0 a3=1 items=0 ppid=3812 pid=6076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:37.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:37.275000 audit[6076]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:37.275000 audit[6076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe184ac90 a2=0 a3=1 items=0 ppid=3812 pid=6076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:37.275000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:37.316121 sshd[6056]: Connection closed by 10.200.16.10 port 33358 Dec 12 17:28:37.317754 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:37.319000 audit[6053]: USER_END pid=6053 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:37.320000 audit[6053]: CRED_DISP pid=6053 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:37.323494 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:33358.service: Deactivated successfully. Dec 12 17:28:37.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.11:22-10.200.16.10:33358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:37.326425 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:28:37.329157 systemd-logind[2072]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:28:37.331548 systemd-logind[2072]: Removed session 20. Dec 12 17:28:37.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.11:22-10.200.16.10:33368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:37.406166 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:33368.service - OpenSSH per-connection server daemon (10.200.16.10:33368). Dec 12 17:28:37.837000 audit[6081]: USER_ACCT pid=6081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:37.839215 sshd[6081]: Accepted publickey for core from 10.200.16.10 port 33368 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:37.838000 audit[6081]: CRED_ACQ pid=6081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:37.838000 audit[6081]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1d9dd60 a2=3 a3=0 items=0 ppid=1 pid=6081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:37.838000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:37.840052 sshd-session[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:37.844565 systemd-logind[2072]: New session 21 of user core. Dec 12 17:28:37.851110 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 17:28:37.852000 audit[6081]: USER_START pid=6081 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:37.853000 audit[6085]: CRED_ACQ pid=6085 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:38.232071 sshd[6085]: Connection closed by 10.200.16.10 port 33368 Dec 12 17:28:38.262324 sshd-session[6081]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:38.263000 audit[6081]: USER_END pid=6081 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:38.263000 audit[6081]: CRED_DISP pid=6081 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:38.268307 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:33368.service: Deactivated successfully. Dec 12 17:28:38.268825 systemd-logind[2072]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:28:38.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.11:22-10.200.16.10:33368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 17:28:38.272645 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:28:38.274622 systemd-logind[2072]: Removed session 21. Dec 12 17:28:38.324142 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:33376.service - OpenSSH per-connection server daemon (10.200.16.10:33376). Dec 12 17:28:38.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.11:22-10.200.16.10:33376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:38.743000 audit[6095]: USER_ACCT pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:38.744998 sshd[6095]: Accepted publickey for core from 10.200.16.10 port 33376 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:38.745000 audit[6095]: CRED_ACQ pid=6095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:38.745000 audit[6095]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe384c9a0 a2=3 a3=0 items=0 ppid=1 pid=6095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:38.745000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:38.746507 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:38.750886 systemd-logind[2072]: New session 22 of user core. Dec 12 17:28:38.756036 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 17:28:38.759000 audit[6095]: USER_START pid=6095 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:38.762000 audit[6098]: CRED_ACQ pid=6098 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:39.030487 sshd[6098]: Connection closed by 10.200.16.10 port 33376 Dec 12 17:28:39.031950 sshd-session[6095]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:39.032000 audit[6095]: USER_END pid=6095 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:39.032000 audit[6095]: CRED_DISP pid=6095 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:39.036034 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:33376.service: Deactivated successfully. Dec 12 17:28:39.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.11:22-10.200.16.10:33376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:39.038424 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:28:39.041483 systemd-logind[2072]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:28:39.043408 systemd-logind[2072]: Removed session 22. 
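The audit PROCTITLE records in the session and netfilter entries above carry the process command line as a hex string with NUL-separated arguments; decoded, the sshd records read "sshd-session: core [priv]" and the iptables-restore records read "iptables-restore -w 5 -W 100000 --noflush --counters". A minimal decoding sketch in Python, assuming the hex value is copied out of the log by hand:

#!/usr/bin/env python3
# Decode the hex-encoded proctitle field of Linux audit PROCTITLE records.
# Arguments are separated by NUL bytes in the raw value, so NUL is replaced
# with a space for display.

def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

if __name__ == "__main__":
    samples = [
        # sshd-session records above -> "sshd-session: core [priv]"
        "737368642D73657373696F6E3A20636F7265205B707269765D",
        # iptables-restore records above ->
        # "iptables-restore -w 5 -W 100000 --noflush --counters"
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273",
    ]
    for sample in samples:
        print(decode_proctitle(sample))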
Dec 12 17:28:40.900765 kubelet[3664]: E1212 17:28:40.900708 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:28:41.404884 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 17:28:41.405035 kernel: audit: type=1325 audit(1765560521.394:895): table=filter:149 family=2 entries=26 op=nft_register_rule pid=6111 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:41.394000 audit[6111]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=6111 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:41.394000 audit[6111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffec662630 a2=0 a3=1 items=0 ppid=3812 pid=6111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:41.427906 kernel: audit: type=1300 audit(1765560521.394:895): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffec662630 a2=0 a3=1 items=0 ppid=3812 pid=6111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:41.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:41.438501 kernel: audit: type=1327 audit(1765560521.394:895): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:41.442000 audit[6111]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=6111 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:41.442000 audit[6111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffec662630 a2=0 a3=1 items=0 ppid=3812 pid=6111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:41.480725 kernel: audit: type=1325 audit(1765560521.442:896): table=nat:150 family=2 entries=104 op=nft_register_chain pid=6111 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:28:41.480864 kernel: audit: type=1300 audit(1765560521.442:896): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffec662630 a2=0 a3=1 items=0 ppid=3812 pid=6111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:41.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:41.494812 kernel: audit: type=1327 audit(1765560521.442:896): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:28:41.900496 containerd[2108]: time="2025-12-12T17:28:41.900255665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:42.152732 containerd[2108]: time="2025-12-12T17:28:42.152579364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:42.156730 containerd[2108]: time="2025-12-12T17:28:42.156678170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:42.156817 containerd[2108]: time="2025-12-12T17:28:42.156791108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:42.157023 kubelet[3664]: E1212 17:28:42.156970 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:42.157371 kubelet[3664]: E1212 17:28:42.157032 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:42.157371 kubelet[3664]: E1212 17:28:42.157134 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:42.159897 containerd[2108]: time="2025-12-12T17:28:42.159868868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:42.433831 containerd[2108]: time="2025-12-12T17:28:42.433320607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:42.438988 containerd[2108]: time="2025-12-12T17:28:42.438843089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:42.438988 containerd[2108]: time="2025-12-12T17:28:42.438876778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:42.439114 kubelet[3664]: E1212 17:28:42.439081 3664 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:42.439149 kubelet[3664]: E1212 17:28:42.439126 3664 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:42.439269 kubelet[3664]: E1212 17:28:42.439218 3664 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-trqfx_calico-system(b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:42.440417 kubelet[3664]: E1212 17:28:42.440373 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:28:44.114058 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:50238.service - OpenSSH per-connection server daemon (10.200.16.10:50238). 
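Every pull attempt above for the ghcr.io/flatcar/calico/*:v3.30.4 images ends the same way: the registry answers 404, containerd reports "failed to pull and unpack image ... not found", and kubelet surfaces it as ErrImagePull and then ImagePullBackOff. To see at a glance which images are affected and how often, a small script can group those errors from a saved copy of this journal; this is only a sketch, and the input path node.log is a placeholder:

#!/usr/bin/env python3
# Group the "failed to pull and unpack image" errors seen above by image
# reference. Reads a plain-text journal dump; "node.log" is just an example
# file name, pass the real path as the first argument.
import re
import sys
from collections import Counter

# The image reference may be wrapped in escaped quotes (\" or \\\") inside
# kubelet error strings, so allow any number of backslashes around the quotes.
PULL_ERR = re.compile(r'failed to pull and unpack image \\*"([^"\\]+)\\*"')

def count_failures(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for image in PULL_ERR.findall(line):
                counts[image] += 1
    return counts

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "node.log"
    for image, n in count_failures(path).most_common():
        print(f"{n:4d}  {image}")

For this node the counts would all point at the v3.30.4 Calico tags, matching the containerd "fetch failed after status: 404 Not Found" entries above.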
Dec 12 17:28:44.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.11:22-10.200.16.10:50238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:44.135879 kernel: audit: type=1130 audit(1765560524.113:897): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.11:22-10.200.16.10:50238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:44.521000 audit[6113]: USER_ACCT pid=6113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:44.539567 sshd[6113]: Accepted publickey for core from 10.200.16.10 port 50238 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:44.539363 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:44.537000 audit[6113]: CRED_ACQ pid=6113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:44.548990 kernel: audit: type=1101 audit(1765560524.521:898): pid=6113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:44.566413 systemd-logind[2072]: New session 23 of user core. Dec 12 17:28:44.576619 kernel: audit: type=1103 audit(1765560524.537:899): pid=6113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:44.576706 kernel: audit: type=1006 audit(1765560524.537:900): pid=6113 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 17:28:44.537000 audit[6113]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdeea9440 a2=3 a3=0 items=0 ppid=1 pid=6113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:44.537000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:44.581104 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 17:28:44.584000 audit[6113]: USER_START pid=6113 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:44.586000 audit[6116]: CRED_ACQ pid=6116 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:44.781820 sshd[6116]: Connection closed by 10.200.16.10 port 50238 Dec 12 17:28:44.783061 sshd-session[6113]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:44.783000 audit[6113]: USER_END pid=6113 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:44.783000 audit[6113]: CRED_DISP pid=6113 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:44.786731 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:50238.service: Deactivated successfully. Dec 12 17:28:44.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.11:22-10.200.16.10:50238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:44.788646 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:28:44.789719 systemd-logind[2072]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:28:44.791772 systemd-logind[2072]: Removed session 23. 
Dec 12 17:28:46.899850 kubelet[3664]: E1212 17:28:46.899490 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:28:46.901581 kubelet[3664]: E1212 17:28:46.901533 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:28:49.881891 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:28:49.882020 kernel: audit: type=1130 audit(1765560529.875:906): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.11:22-10.200.16.10:50240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:49.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.11:22-10.200.16.10:50240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:49.877244 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:50240.service - OpenSSH per-connection server daemon (10.200.16.10:50240). 
Dec 12 17:28:49.899320 kubelet[3664]: E1212 17:28:49.898955 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3" Dec 12 17:28:50.323000 audit[6130]: USER_ACCT pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.341545 sshd[6130]: Accepted publickey for core from 10.200.16.10 port 50240 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:50.341000 audit[6130]: CRED_ACQ pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.343231 sshd-session[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:50.358207 kernel: audit: type=1101 audit(1765560530.323:907): pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.358300 kernel: audit: type=1103 audit(1765560530.341:908): pid=6130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.369040 kernel: audit: type=1006 audit(1765560530.341:909): pid=6130 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 17:28:50.341000 audit[6130]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa48b320 a2=3 a3=0 items=0 ppid=1 pid=6130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:50.386462 kernel: audit: type=1300 audit(1765560530.341:909): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa48b320 a2=3 a3=0 items=0 ppid=1 pid=6130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:50.341000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:50.393270 kernel: audit: type=1327 audit(1765560530.341:909): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:50.395629 systemd-logind[2072]: New session 24 of user core. Dec 12 17:28:50.402031 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 12 17:28:50.404000 audit[6130]: USER_START pid=6130 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.407000 audit[6133]: CRED_ACQ pid=6133 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.443565 kernel: audit: type=1105 audit(1765560530.404:910): pid=6130 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.443648 kernel: audit: type=1103 audit(1765560530.407:911): pid=6133 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.617533 sshd[6133]: Connection closed by 10.200.16.10 port 50240 Dec 12 17:28:50.618116 sshd-session[6130]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:50.617000 audit[6130]: USER_END pid=6130 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.617000 audit[6130]: CRED_DISP pid=6130 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.639515 systemd-logind[2072]: Session 24 logged out. Waiting for processes to exit. Dec 12 17:28:50.639804 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:28:50.639926 kernel: audit: type=1106 audit(1765560530.617:912): pid=6130 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:50.641912 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:50240.service: Deactivated successfully. Dec 12 17:28:50.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.11:22-10.200.16.10:50240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:50.655445 systemd-logind[2072]: Removed session 24. 
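The interleaved "kernel: audit: type=NNNN ..." lines are kauditd echoing the same records by numeric type; in this excerpt type=1101 pairs with USER_ACCT, 1103 with CRED_ACQ, 1105 with USER_START, 1106 with USER_END, 1104 with CRED_DISP, 1130 with SERVICE_START, 1006 with the auid/ses assignment, and 1300/1325/1327 with SYSCALL, NETFILTER_CFG and PROCTITLE. A small lookup table (values from the AUDIT_* constants in linux/audit.h, limited to the types seen here) makes the numeric echoes easier to read:

#!/usr/bin/env python3
# Name the numeric audit record types echoed by kauditd in the excerpt above.
# Only the types visible in this log are listed, plus SERVICE_STOP, which
# appears here in text form only.
import re

AUDIT_TYPES = {
    1006: "LOGIN",          # login uid (auid) assigned to the new session
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",       # PAM credentials acquired
    1104: "CRED_DISP",      # PAM credentials released
    1105: "USER_START",     # PAM session opened
    1106: "USER_END",       # PAM session closed
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",
    1325: "NETFILTER_CFG",
    1327: "PROCTITLE",
}

TYPE_RE = re.compile(r"audit: type=(\d+)")

def annotate(line: str) -> str:
    """Append the symbolic record name to a 'kernel: audit: type=NNNN' line."""
    m = TYPE_RE.search(line)
    if not m:
        return line
    name = AUDIT_TYPES.get(int(m.group(1)), "UNKNOWN")
    return f"{line.rstrip()}  [{name}]"

if __name__ == "__main__":
    sample = "Dec 12 17:28:44.135879 kernel: audit: type=1130 audit(1765560524.113:897): pid=1 uid=0"
    print(annotate(sample))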
Dec 12 17:28:50.655942 kernel: audit: type=1104 audit(1765560530.617:913): pid=6130 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:51.900901 kubelet[3664]: E1212 17:28:51.899624 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0" Dec 12 17:28:52.899958 kubelet[3664]: E1212 17:28:52.899888 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b" Dec 12 17:28:55.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.11:22-10.200.16.10:48588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:55.709126 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:48588.service - OpenSSH per-connection server daemon (10.200.16.10:48588). Dec 12 17:28:55.712459 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:55.712552 kernel: audit: type=1130 audit(1765560535.708:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.11:22-10.200.16.10:48588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:55.900831 kubelet[3664]: E1212 17:28:55.900769 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7" Dec 12 17:28:56.146000 audit[6170]: USER_ACCT pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.148068 sshd[6170]: Accepted publickey for core from 10.200.16.10 port 48588 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:28:56.165986 kernel: audit: type=1101 audit(1765560536.146:916): pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.165000 audit[6170]: CRED_ACQ pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.167192 sshd-session[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:56.187707 systemd-logind[2072]: New session 25 of user core. 
Dec 12 17:28:56.194882 kernel: audit: type=1103 audit(1765560536.165:917): pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.194987 kernel: audit: type=1006 audit(1765560536.165:918): pid=6170 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 12 17:28:56.165000 audit[6170]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb1b97c0 a2=3 a3=0 items=0 ppid=1 pid=6170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:56.212184 kernel: audit: type=1300 audit(1765560536.165:918): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb1b97c0 a2=3 a3=0 items=0 ppid=1 pid=6170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:56.165000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:56.218828 kernel: audit: type=1327 audit(1765560536.165:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:56.219298 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 17:28:56.221000 audit[6170]: USER_START pid=6170 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.223000 audit[6173]: CRED_ACQ pid=6173 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.287094 kernel: audit: type=1105 audit(1765560536.221:919): pid=6170 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.287251 kernel: audit: type=1103 audit(1765560536.223:920): pid=6173 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.456608 sshd[6173]: Connection closed by 10.200.16.10 port 48588 Dec 12 17:28:56.456908 sshd-session[6170]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:56.458000 audit[6170]: USER_END pid=6170 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.462669 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:48588.service: Deactivated successfully. 
Dec 12 17:28:56.465587 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 17:28:56.502645 systemd-logind[2072]: Session 25 logged out. Waiting for processes to exit. Dec 12 17:28:56.458000 audit[6170]: CRED_DISP pid=6170 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.504789 systemd-logind[2072]: Removed session 25. Dec 12 17:28:56.517238 kernel: audit: type=1106 audit(1765560536.458:921): pid=6170 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.517333 kernel: audit: type=1104 audit(1765560536.458:922): pid=6170 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:28:56.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.11:22-10.200.16.10:48588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:58.899614 kubelet[3664]: E1212 17:28:58.899549 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8" Dec 12 17:29:01.549115 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:58580.service - OpenSSH per-connection server daemon (10.200.16.10:58580). Dec 12 17:29:01.553467 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:01.553505 kernel: audit: type=1130 audit(1765560541.548:924): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.11:22-10.200.16.10:58580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:01.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.11:22-10.200.16.10:58580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:01.899579 kubelet[3664]: E1212 17:29:01.899265 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2" Dec 12 17:29:01.990000 audit[6184]: USER_ACCT pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.006759 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 58580 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws Dec 12 17:29:02.007964 sshd-session[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:02.006000 audit[6184]: CRED_ACQ pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.023131 kernel: audit: type=1101 audit(1765560541.990:925): pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.023235 kernel: audit: type=1103 audit(1765560542.006:926): pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.032629 systemd-logind[2072]: New session 26 of user core. Dec 12 17:29:02.037983 kernel: audit: type=1006 audit(1765560542.006:927): pid=6184 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 12 17:29:02.038928 kernel: audit: type=1300 audit(1765560542.006:927): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9ea16a0 a2=3 a3=0 items=0 ppid=1 pid=6184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:02.006000 audit[6184]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9ea16a0 a2=3 a3=0 items=0 ppid=1 pid=6184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:02.006000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:02.057154 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 12 17:29:02.061718 kernel: audit: type=1327 audit(1765560542.006:927): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:02.062000 audit[6184]: USER_START pid=6184 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.064000 audit[6187]: CRED_ACQ pid=6187 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.098622 kernel: audit: type=1105 audit(1765560542.062:928): pid=6184 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.098735 kernel: audit: type=1103 audit(1765560542.064:929): pid=6187 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.301412 sshd[6187]: Connection closed by 10.200.16.10 port 58580 Dec 12 17:29:02.302810 sshd-session[6184]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:02.303000 audit[6184]: USER_END pid=6184 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 12 17:29:02.308699 systemd-logind[2072]: Session 26 logged out. Waiting for processes to exit. Dec 12 17:29:02.310197 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:58580.service: Deactivated successfully. Dec 12 17:29:02.314087 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 17:29:02.318183 systemd-logind[2072]: Removed session 26. 
Dec 12 17:29:02.304000 audit[6184]: CRED_DISP pid=6184 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:02.337963 kernel: audit: type=1106 audit(1765560542.303:930): pid=6184 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:02.338075 kernel: audit: type=1104 audit(1765560542.304:931): pid=6184 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:02.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.11:22-10.200.16.10:58580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:29:02.899375 kubelet[3664]: E1212 17:29:02.899100 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-mzmnd" podUID="ea3fa3f4-48d1-49ff-b968-783d9802a6b3"
Dec 12 17:29:03.898908 kubelet[3664]: E1212 17:29:03.898838 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b7d8d9766-d9xlz" podUID="e108b102-1b18-4781-b323-af4f0e442eb0"
Dec 12 17:29:05.900291 kubelet[3664]: E1212 17:29:05.900140 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trqfx" podUID="b9c95eef-5d4b-4e75-a0b3-a9dd678f3f4b"
Dec 12 17:29:07.403445 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:58588.service - OpenSSH per-connection server daemon (10.200.16.10:58588).
Dec 12 17:29:07.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.11:22-10.200.16.10:58588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:29:07.408883 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 17:29:07.408978 kernel: audit: type=1130 audit(1765560547.402:933): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.11:22-10.200.16.10:58588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:29:07.848000 audit[6198]: USER_ACCT pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:07.855014 sshd[6198]: Accepted publickey for core from 10.200.16.10 port 58588 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws
Dec 12 17:29:07.855838 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:07.854000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:07.891425 kernel: audit: type=1101 audit(1765560547.848:934): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:07.891567 kernel: audit: type=1103 audit(1765560547.854:935): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:07.900951 kernel: audit: type=1006 audit(1765560547.854:936): pid=6198 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Dec 12 17:29:07.854000 audit[6198]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe86e4850 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:07.918292 kernel: audit: type=1300 audit(1765560547.854:936): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe86e4850 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:07.854000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:29:07.921328 systemd-logind[2072]: New session 27 of user core.
Dec 12 17:29:07.927427 kernel: audit: type=1327 audit(1765560547.854:936): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:29:07.928078 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 12 17:29:07.931000 audit[6198]: USER_START pid=6198 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:07.953899 kernel: audit: type=1105 audit(1765560547.931:937): pid=6198 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:07.953000 audit[6203]: CRED_ACQ pid=6203 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:07.970881 kernel: audit: type=1103 audit(1765560547.953:938): pid=6203 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:08.162651 sshd[6203]: Connection closed by 10.200.16.10 port 58588
Dec 12 17:29:08.163317 sshd-session[6198]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:08.163000 audit[6198]: USER_END pid=6198 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:08.163000 audit[6198]: CRED_DISP pid=6198 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:08.185067 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:58588.service: Deactivated successfully.
Dec 12 17:29:08.185114 systemd-logind[2072]: Session 27 logged out. Waiting for processes to exit.
Dec 12 17:29:08.188846 systemd[1]: session-27.scope: Deactivated successfully.
Dec 12 17:29:08.193365 systemd-logind[2072]: Removed session 27.
Dec 12 17:29:08.198974 kernel: audit: type=1106 audit(1765560548.163:939): pid=6198 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:08.199071 kernel: audit: type=1104 audit(1765560548.163:940): pid=6198 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:08.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.11:22-10.200.16.10:58588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:29:09.901293 kubelet[3664]: E1212 17:29:09.901250 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74445c9dcb-4pqcz" podUID="202224df-4814-4cd7-bd50-d9bc16a19fb7"
Dec 12 17:29:12.899899 kubelet[3664]: E1212 17:29:12.899067 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ff9884-zb77j" podUID="3862f9ed-932b-48f5-bc48-007596c724c2"
Dec 12 17:29:13.255670 systemd[1]: Started sshd@25-10.200.20.11:22-10.200.16.10:34340.service - OpenSSH per-connection server daemon (10.200.16.10:34340).
Dec 12 17:29:13.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.11:22-10.200.16.10:34340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:29:13.259657 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 17:29:13.259743 kernel: audit: type=1130 audit(1765560553.255:942): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.11:22-10.200.16.10:34340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 17:29:13.701000 audit[6215]: USER_ACCT pid=6215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:13.702942 sshd[6215]: Accepted publickey for core from 10.200.16.10 port 34340 ssh2: RSA SHA256:x+iqxkdRxG6IDwZa98SgfUXewf8OB4qY0HpOCSf3mws
Dec 12 17:29:13.718911 sshd-session[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:13.717000 audit[6215]: CRED_ACQ pid=6215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:13.737951 kernel: audit: type=1101 audit(1765560553.701:943): pid=6215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:13.738046 kernel: audit: type=1103 audit(1765560553.717:944): pid=6215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:13.749245 kernel: audit: type=1006 audit(1765560553.717:945): pid=6215 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Dec 12 17:29:13.717000 audit[6215]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc6506a0 a2=3 a3=0 items=0 ppid=1 pid=6215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:13.766315 kernel: audit: type=1300 audit(1765560553.717:945): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc6506a0 a2=3 a3=0 items=0 ppid=1 pid=6215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 17:29:13.717000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:29:13.774938 kernel: audit: type=1327 audit(1765560553.717:945): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 17:29:13.776157 systemd-logind[2072]: New session 28 of user core.
Dec 12 17:29:13.781039 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 12 17:29:13.785000 audit[6215]: USER_START pid=6215 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:13.806000 audit[6218]: CRED_ACQ pid=6218 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:13.826334 kernel: audit: type=1105 audit(1765560553.785:946): pid=6215 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:13.826454 kernel: audit: type=1103 audit(1765560553.806:947): pid=6218 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:13.899150 kubelet[3664]: E1212 17:29:13.899016 3664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8dz6b" podUID="444315f1-eff9-48ed-a7dc-c9b319819cb8"
Dec 12 17:29:14.016402 sshd[6218]: Connection closed by 10.200.16.10 port 34340
Dec 12 17:29:14.017197 sshd-session[6215]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:14.018000 audit[6215]: USER_END pid=6215 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:14.030056 systemd[1]: sshd@25-10.200.20.11:22-10.200.16.10:34340.service: Deactivated successfully.
Dec 12 17:29:14.031822 systemd[1]: session-28.scope: Deactivated successfully.
Dec 12 17:29:14.032209 systemd-logind[2072]: Session 28 logged out. Waiting for processes to exit.
Dec 12 17:29:14.036274 systemd-logind[2072]: Removed session 28.
Dec 12 17:29:14.026000 audit[6215]: CRED_DISP pid=6215 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:14.060925 kernel: audit: type=1106 audit(1765560554.018:948): pid=6215 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:14.061037 kernel: audit: type=1104 audit(1765560554.026:949): pid=6215 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 12 17:29:14.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.11:22-10.200.16.10:34340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'