Dec 16 12:13:22.506877 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Dec 16 12:13:22.506896 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025
Dec 16 12:13:22.506904 kernel: KASLR enabled
Dec 16 12:13:22.506908 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 16 12:13:22.506913 kernel: printk: legacy bootconsole [pl11] enabled
Dec 16 12:13:22.506917 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:13:22.506922 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3ed698 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Dec 16 12:13:22.506926 kernel: random: crng init done
Dec 16 12:13:22.506931 kernel: secureboot: Secure boot disabled
Dec 16 12:13:22.506935 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:13:22.506939 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Dec 16 12:13:22.506944 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506948 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506953 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 12:13:22.506959 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506963 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506968 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506973 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506978 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506983 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506987 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 16 12:13:22.506992 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:13:22.506996 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 16 12:13:22.507000 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:13:22.507005 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 16 12:13:22.507009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Dec 16 12:13:22.507014 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Dec 16 12:13:22.507019 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 16 12:13:22.507024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 16 12:13:22.507028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 16 12:13:22.507033 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 16 12:13:22.507037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 16 12:13:22.507041 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 16 12:13:22.507046 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 16 12:13:22.507050 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 16 12:13:22.507054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 16 12:13:22.507059 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Dec 16 12:13:22.507063 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Dec 16 12:13:22.507069 kernel: Zone ranges:
Dec 16 12:13:22.507073 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 16 12:13:22.507080 kernel: DMA32 empty
Dec 16 12:13:22.507084 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:13:22.507089 kernel: Device empty
Dec 16 12:13:22.507095 kernel: Movable zone start for each node
Dec 16 12:13:22.507099 kernel: Early memory node ranges
Dec 16 12:13:22.507104 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 16 12:13:22.507109 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Dec 16 12:13:22.507113 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Dec 16 12:13:22.507118 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Dec 16 12:13:22.507123 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Dec 16 12:13:22.507127 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Dec 16 12:13:22.507132 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:13:22.507138 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 16 12:13:22.507142 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 16 12:13:22.507147 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Dec 16 12:13:22.507152 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:13:22.507156 kernel: psci: PSCIv1.3 detected in firmware.
Dec 16 12:13:22.507161 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:13:22.507165 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 16 12:13:22.507170 kernel: psci: SMC Calling Convention v1.4
Dec 16 12:13:22.507175 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 16 12:13:22.507179 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 16 12:13:22.507184 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:13:22.507188 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:13:22.507194 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:13:22.507199 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:13:22.507204 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Dec 16 12:13:22.507208 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:13:22.507213 kernel: CPU features: detected: Spectre-v4
Dec 16 12:13:22.507218 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:13:22.507222 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:13:22.507227 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:13:22.507232 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Dec 16 12:13:22.507236 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:13:22.507242 kernel: alternatives: applying boot alternatives
Dec 16 12:13:22.507248 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749
Dec 16 12:13:22.507252 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:13:22.507257 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:13:22.507262 kernel: Fallback order for Node 0: 0
Dec 16 12:13:22.507267 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Dec 16 12:13:22.507271 kernel: Policy zone: Normal
Dec 16 12:13:22.507276 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:13:22.507280 kernel: software IO TLB: area num 2.
Dec 16 12:13:22.507285 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB)
Dec 16 12:13:22.507290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:13:22.507295 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:13:22.507301 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:13:22.507306 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:13:22.507310 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:13:22.507315 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:13:22.507320 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:13:22.507325 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:13:22.507329 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:13:22.507334 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:13:22.507339 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:13:22.507343 kernel: GICv3: 960 SPIs implemented
Dec 16 12:13:22.507349 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:13:22.507353 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:13:22.507358 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Dec 16 12:13:22.507363 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Dec 16 12:13:22.507367 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 16 12:13:22.507372 kernel: ITS: No ITS available, not enabling LPIs
Dec 16 12:13:22.507377 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:13:22.507381 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Dec 16 12:13:22.507386 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 12:13:22.507391 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Dec 16 12:13:22.507396 kernel: Console: colour dummy device 80x25
Dec 16 12:13:22.507402 kernel: printk: legacy console [tty1] enabled
Dec 16 12:13:22.507407 kernel: ACPI: Core revision 20240827
Dec 16 12:13:22.507412 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Dec 16 12:13:22.507417 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:13:22.507422 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:13:22.507426 kernel: landlock: Up and running.
Dec 16 12:13:22.507431 kernel: SELinux: Initializing.
Dec 16 12:13:22.507437 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:13:22.507442 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:13:22.507447 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Dec 16 12:13:22.507452 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Dec 16 12:13:22.507461 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 12:13:22.507466 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:13:22.507472 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:13:22.507494 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:13:22.507500 kernel: Remapping and enabling EFI services.
Dec 16 12:13:22.507506 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:13:22.507511 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:13:22.507516 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 16 12:13:22.507522 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Dec 16 12:13:22.507528 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:13:22.507533 kernel: SMP: Total of 2 processors activated.
Dec 16 12:13:22.507538 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:13:22.507543 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:13:22.507548 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 16 12:13:22.507553 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:13:22.507558 kernel: CPU features: detected: Common not Private translations
Dec 16 12:13:22.507564 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:13:22.507570 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Dec 16 12:13:22.507575 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:13:22.507580 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:13:22.507585 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:13:22.507590 kernel: CPU features: detected: Speculation barrier (SB)
Dec 16 12:13:22.507595 kernel: CPU features: detected: TLB range maintenance instructions
Dec 16 12:13:22.507602 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:13:22.507607 kernel: CPU features: detected: Scalable Vector Extension
Dec 16 12:13:22.507612 kernel: alternatives: applying system-wide alternatives
Dec 16 12:13:22.507617 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Dec 16 12:13:22.507622 kernel: SVE: maximum available vector length 16 bytes per vector
Dec 16 12:13:22.507627 kernel: SVE: default vector length 16 bytes per vector
Dec 16 12:13:22.507633 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved)
Dec 16 12:13:22.507639 kernel: devtmpfs: initialized
Dec 16 12:13:22.507644 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:13:22.507649 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:13:22.507654 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:13:22.507659 kernel: 0 pages in range for non-PLT usage
Dec 16 12:13:22.507665 kernel: 515168 pages in range for PLT usage
Dec 16 12:13:22.507670 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:13:22.507676 kernel: SMBIOS 3.1.0 present.
Dec 16 12:13:22.507681 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Dec 16 12:13:22.507686 kernel: DMI: Memory slots populated: 2/2
Dec 16 12:13:22.507691 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:13:22.507697 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:13:22.507702 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:13:22.507707 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:13:22.507713 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:13:22.507719 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Dec 16 12:13:22.507724 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:13:22.507729 kernel: cpuidle: using governor menu
Dec 16 12:13:22.507734 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:13:22.507739 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:13:22.507744 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:13:22.507749 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:13:22.507755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:13:22.507760 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:13:22.507765 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:13:22.507770 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:13:22.507776 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:13:22.507781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:13:22.507786 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:13:22.507792 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:13:22.507797 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:13:22.507802 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:13:22.507807 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:13:22.507812 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:13:22.507817 kernel: ACPI: Interpreter enabled
Dec 16 12:13:22.507822 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:13:22.507829 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:13:22.507834 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:13:22.507839 kernel: printk: legacy bootconsole [pl11] disabled
Dec 16 12:13:22.507844 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 16 12:13:22.507849 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:13:22.507854 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:13:22.507859 kernel: iommu: Default domain type: Translated
Dec 16 12:13:22.507865 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:13:22.507870 kernel: efivars: Registered efivars operations
Dec 16 12:13:22.507875 kernel: vgaarb: loaded
Dec 16 12:13:22.507881 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:13:22.507886 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:13:22.507891 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:13:22.507896 kernel: pnp: PnP ACPI init
Dec 16 12:13:22.507902 kernel: pnp: PnP ACPI: found 0 devices
Dec 16 12:13:22.507907 kernel: NET: Registered PF_INET protocol family
Dec 16 12:13:22.507912 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:13:22.507917 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:13:22.507923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:13:22.507928 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:13:22.507933 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:13:22.507939 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:13:22.507944 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:13:22.507949 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:13:22.507955 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:13:22.507960 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:13:22.507965 kernel: kvm [1]: HYP mode not available
Dec 16 12:13:22.507970 kernel: Initialise system trusted keyrings
Dec 16 12:13:22.507976 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:13:22.507981 kernel: Key type asymmetric registered
Dec 16 12:13:22.507986 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:13:22.507991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:13:22.507996 kernel: io scheduler mq-deadline registered
Dec 16 12:13:22.508001 kernel: io scheduler kyber registered
Dec 16 12:13:22.508006 kernel: io scheduler bfq registered
Dec 16 12:13:22.508011 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:13:22.508018 kernel: thunder_xcv, ver 1.0
Dec 16 12:13:22.508023 kernel: thunder_bgx, ver 1.0
Dec 16 12:13:22.508028 kernel: nicpf, ver 1.0
Dec 16 12:13:22.508033 kernel: nicvf, ver 1.0
Dec 16 12:13:22.508172 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:13:22.508241 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:13:19 UTC (1765887199)
Dec 16 12:13:22.508250 kernel: efifb: probing for efifb
Dec 16 12:13:22.508255 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 12:13:22.508261 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 12:13:22.508266 kernel: efifb: scrolling: redraw
Dec 16 12:13:22.508271 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 12:13:22.508276 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:13:22.508281 kernel: fb0: EFI VGA frame buffer device
Dec 16 12:13:22.508288 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 16 12:13:22.508293 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:13:22.508298 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:13:22.508303 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:13:22.508309 kernel: watchdog: NMI not fully supported
Dec 16 12:13:22.508314 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:13:22.508319 kernel: Segment Routing with IPv6
Dec 16 12:13:22.508329 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:13:22.508334 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:13:22.508339 kernel: Key type dns_resolver registered
Dec 16 12:13:22.508344 kernel: registered taskstats version 1
Dec 16 12:13:22.508350 kernel: Loading compiled-in X.509 certificates
Dec 16 12:13:22.508355 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d'
Dec 16 12:13:22.508360 kernel: Demotion targets for Node 0: null
Dec 16 12:13:22.508366 kernel: Key type .fscrypt registered
Dec 16 12:13:22.508371 kernel: Key type fscrypt-provisioning registered
Dec 16 12:13:22.508376 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:13:22.508382 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:13:22.508387 kernel: ima: No architecture policies found
Dec 16 12:13:22.508392 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:13:22.508397 kernel: clk: Disabling unused clocks
Dec 16 12:13:22.508402 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:13:22.508408 kernel: Freeing unused kernel memory: 12480K
Dec 16 12:13:22.508413 kernel: Run /init as init process
Dec 16 12:13:22.508419 kernel: with arguments:
Dec 16 12:13:22.508424 kernel: /init
Dec 16 12:13:22.508429 kernel: with environment:
Dec 16 12:13:22.508434 kernel: HOME=/
Dec 16 12:13:22.508439 kernel: TERM=linux
Dec 16 12:13:22.508445 kernel: hv_vmbus: Vmbus version:5.3
Dec 16 12:13:22.508450 kernel: hv_vmbus: registering driver hid_hyperv
Dec 16 12:13:22.508455 kernel: SCSI subsystem initialized
Dec 16 12:13:22.508461 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 16 12:13:22.510679 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 16 12:13:22.510703 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 16 12:13:22.510716 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 16 12:13:22.510721 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 12:13:22.510727 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 12:13:22.510732 kernel: PTP clock support registered
Dec 16 12:13:22.510738 kernel: hv_utils: Registering HyperV Utility Driver
Dec 16 12:13:22.510743 kernel: hv_vmbus: registering driver hv_utils
Dec 16 12:13:22.510748 kernel: hv_utils: Heartbeat IC version 3.0
Dec 16 12:13:22.510755 kernel: hv_utils: Shutdown IC version 3.2
Dec 16 12:13:22.510760 kernel: hv_utils: TimeSync IC version 4.0
Dec 16 12:13:22.510766 kernel: hv_vmbus: registering driver hv_storvsc
Dec 16 12:13:22.510871 kernel: scsi host0: storvsc_host_t
Dec 16 12:13:22.510954 kernel: scsi host1: storvsc_host_t
Dec 16 12:13:22.511044 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 16 12:13:22.511129 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 16 12:13:22.511204 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 16 12:13:22.511277 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 16 12:13:22.511350 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 12:13:22.511422 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 16 12:13:22.511516 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 16 12:13:22.511604 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:13:22.511673 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:13:22.511680 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 12:13:22.511752 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 16 12:13:22.511827 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 16 12:13:22.511835 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 12:13:22.511907 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 16 12:13:22.511913 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:13:22.511919 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:13:22.511924 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:13:22.511930 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:13:22.511935 kernel: raid6: neonx8 gen() 18500 MB/s
Dec 16 12:13:22.511942 kernel: raid6: neonx4 gen() 18562 MB/s
Dec 16 12:13:22.511947 kernel: raid6: neonx2 gen() 17068 MB/s
Dec 16 12:13:22.511952 kernel: raid6: neonx1 gen() 15036 MB/s
Dec 16 12:13:22.511958 kernel: raid6: int64x8 gen() 10551 MB/s
Dec 16 12:13:22.511963 kernel: raid6: int64x4 gen() 10623 MB/s
Dec 16 12:13:22.511968 kernel: raid6: int64x2 gen() 9002 MB/s
Dec 16 12:13:22.511973 kernel: raid6: int64x1 gen() 7037 MB/s
Dec 16 12:13:22.511979 kernel: raid6: using algorithm neonx4 gen() 18562 MB/s
Dec 16 12:13:22.511985 kernel: raid6: .... xor() 15142 MB/s, rmw enabled
Dec 16 12:13:22.511991 kernel: raid6: using neon recovery algorithm
Dec 16 12:13:22.511996 kernel: xor: measuring software checksum speed
Dec 16 12:13:22.512001 kernel: 8regs : 28595 MB/sec
Dec 16 12:13:22.512007 kernel: 32regs : 28677 MB/sec
Dec 16 12:13:22.512012 kernel: arm64_neon : 37330 MB/sec
Dec 16 12:13:22.512017 kernel: xor: using function: arm64_neon (37330 MB/sec)
Dec 16 12:13:22.512023 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:13:22.512029 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (468)
Dec 16 12:13:22.512034 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05
Dec 16 12:13:22.512040 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:13:22.512045 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:13:22.512051 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:13:22.512056 kernel: loop: module loaded
Dec 16 12:13:22.512063 kernel: loop0: detected capacity change from 0 to 91832
Dec 16 12:13:22.512068 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:13:22.512074 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:13:22.512082 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:13:22.512088 systemd[1]: Detected virtualization microsoft.
Dec 16 12:13:22.512094 systemd[1]: Detected architecture arm64.
Dec 16 12:13:22.512101 systemd[1]: Running in initrd.
Dec 16 12:13:22.512106 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:13:22.512112 systemd[1]: Hostname set to .
Dec 16 12:13:22.512118 systemd[1]: Initializing machine ID from random generator.
Dec 16 12:13:22.512123 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:13:22.512129 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:13:22.512136 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:13:22.512142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:13:22.512149 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:13:22.512155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:13:22.512161 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:13:22.512167 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:13:22.512174 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:13:22.512179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:13:22.512185 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:13:22.512191 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:13:22.512196 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:13:22.512202 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:13:22.512209 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:13:22.512215 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:13:22.512220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:13:22.512226 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:13:22.512232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:13:22.512238 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:13:22.512244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:13:22.512255 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:13:22.512262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:13:22.512268 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:13:22.512274 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:13:22.512280 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:13:22.512287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:13:22.512293 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:13:22.512299 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:13:22.512305 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:13:22.512311 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:13:22.512317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:13:22.512343 systemd-journald[606]: Collecting audit messages is enabled.
Dec 16 12:13:22.512359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:13:22.512366 systemd-journald[606]: Journal started
Dec 16 12:13:22.512380 systemd-journald[606]: Runtime Journal (/run/log/journal/f94feef6332342c5ab11cc53870ba340) is 8M, max 78.3M, 70.3M free.
Dec 16 12:13:22.531575 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:13:22.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.534508 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:13:22.557498 kernel: audit: type=1130 audit(1765887202.531:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.557878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:13:22.589041 kernel: audit: type=1130 audit(1765887202.557:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.591521 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:13:22.613780 kernel: audit: type=1130 audit(1765887202.588:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.626047 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:13:22.644787 kernel: audit: type=1130 audit(1765887202.618:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.652741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:13:22.784506 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:13:22.819752 systemd-modules-load[609]: Inserted module 'br_netfilter'
Dec 16 12:13:22.827720 kernel: Bridge firewalling registered
Dec 16 12:13:22.821604 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:13:22.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.833818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:13:22.867307 kernel: audit: type=1130 audit(1765887202.831:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:22.863019 systemd-tmpfiles[619]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:13:22.988590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:13:22.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.015493 kernel: audit: type=1130 audit(1765887202.994:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.015105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:13:23.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.026541 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:13:23.045737 kernel: audit: type=1130 audit(1765887203.019:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.078041 kernel: audit: type=1130 audit(1765887203.052:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.072558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:13:23.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.097510 kernel: audit: type=1130 audit(1765887203.082:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.097673 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:13:23.111000 audit: BPF prog-id=6 op=LOAD
Dec 16 12:13:23.112449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:13:23.124691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:13:23.145587 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:13:23.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.165754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:13:23.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.230640 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:13:23.231696 systemd-resolved[631]: Positive Trust Anchors:
Dec 16 12:13:23.231704 systemd-resolved[631]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:13:23.231707 systemd-resolved[631]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 12:13:23.276150 dracut-cmdline[646]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749
Dec 16 12:13:23.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.231726 systemd-resolved[631]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:13:23.269221 systemd-resolved[631]: Defaulting to hostname 'linux'.
Dec 16 12:13:23.270063 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:13:23.283980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:13:23.412500 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:13:23.451508 kernel: iscsi: registered transport (tcp)
Dec 16 12:13:23.484391 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:13:23.484411 kernel: QLogic iSCSI HBA Driver
Dec 16 12:13:23.532597 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:13:23.553569 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:13:23.569665 kernel: kauditd_printk_skb: 4 callbacks suppressed
Dec 16 12:13:23.569687 kernel: audit: type=1130 audit(1765887203.559:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.561005 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:13:23.631549 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:13:23.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.656683 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:13:23.662790 kernel: audit: type=1130 audit(1765887203.637:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.672013 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:13:23.706974 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:13:23.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.735000 audit: BPF prog-id=7 op=LOAD
Dec 16 12:13:23.738357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:13:23.757882 kernel: audit: type=1130 audit(1765887203.719:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.757908 kernel: audit: type=1334 audit(1765887203.735:18): prog-id=7 op=LOAD
Dec 16 12:13:23.757916 kernel: audit: type=1334 audit(1765887203.737:19): prog-id=8 op=LOAD
Dec 16 12:13:23.737000 audit: BPF prog-id=8 op=LOAD
Dec 16 12:13:23.838063 systemd-udevd[881]: Using default interface naming scheme 'v257'.
Dec 16 12:13:23.844902 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:13:23.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.853558 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:13:23.885544 kernel: audit: type=1130 audit(1765887203.851:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.891614 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:13:23.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.929762 dracut-pre-trigger[993]: rd.md=0: removing MD RAID activation
Dec 16 12:13:23.928738 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:13:23.955126 kernel: audit: type=1130 audit(1765887203.907:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:23.955156 kernel: audit: type=1334 audit(1765887203.927:22): prog-id=9 op=LOAD
Dec 16 12:13:23.927000 audit: BPF prog-id=9 op=LOAD
Dec 16 12:13:23.974117 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:13:23.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:24.002503 kernel: audit: type=1130 audit(1765887203.979:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:24.002659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:13:24.010904 systemd-networkd[1004]: lo: Link UP
Dec 16 12:13:24.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:24.010907 systemd-networkd[1004]: lo: Gained carrier
Dec 16 12:13:24.046373 kernel: audit: type=1130 audit(1765887204.021:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:24.012015 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:13:24.022102 systemd[1]: Reached target network.target - Network.
Dec 16 12:13:24.093587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:13:24.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:24.106232 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:13:24.182195 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:13:24.187443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:13:24.197929 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:13:24.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:24.213464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:13:24.257510 kernel: hv_vmbus: registering driver hv_netvsc
Dec 16 12:13:24.268819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:13:24.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:24.294506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 12:13:24.348566 kernel: hv_netvsc 000d3afb-252c-000d-3afb-252c000d3afb eth0: VF slot 1 added
Dec 16 12:13:24.373493 kernel: hv_vmbus: registering driver hv_pci
Dec 16 12:13:24.389632 kernel: hv_pci fd949178-f8ca-4a5c-8807-b8823e2947cb: PCI VMBus probing: Using version 0x10004
Dec 16 12:13:24.389911 kernel: hv_pci fd949178-f8ca-4a5c-8807-b8823e2947cb: PCI host bridge to bus f8ca:00
Dec 16 12:13:24.399822 kernel: pci_bus f8ca:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Dec 16 12:13:24.405775 kernel: pci_bus f8ca:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 12:13:24.440078 kernel: pci f8ca:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Dec 16 12:13:24.439917 systemd-networkd[1004]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:13:24.439920 systemd-networkd[1004]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:13:24.470842 kernel: pci f8ca:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 16 12:13:24.470950 kernel: pci f8ca:00:02.0: enabling Extended Tags
Dec 16 12:13:24.452896 systemd-networkd[1004]: eth0: Link UP
Dec 16 12:13:24.453023 systemd-networkd[1004]: eth0: Gained carrier
Dec 16 12:13:24.453037 systemd-networkd[1004]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 12:13:24.505604 kernel: pci f8ca:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f8ca:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Dec 16 12:13:24.508529 systemd-networkd[1004]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 16 12:13:24.530917 kernel: pci_bus f8ca:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 12:13:24.531117 kernel: pci f8ca:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Dec 16 12:13:24.718897 kernel: mlx5_core f8ca:00:02.0: enabling device (0000 -> 0002)
Dec 16 12:13:24.773516 kernel: mlx5_core f8ca:00:02.0: PTM is not supported by PCIe
Dec 16 12:13:24.773797 kernel: mlx5_core f8ca:00:02.0: firmware version: 16.30.5006
Dec 16 12:13:24.840778 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Dec 16 12:13:24.853709 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:13:24.898892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 16 12:13:24.950162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Dec 16 12:13:24.972105 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Dec 16 12:13:25.048512 kernel: hv_netvsc 000d3afb-252c-000d-3afb-252c000d3afb eth0: VF registering: eth1
Dec 16 12:13:25.048842 kernel: mlx5_core f8ca:00:02.0 eth1: joined to eth0
Dec 16 12:13:25.063730 kernel: mlx5_core f8ca:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Dec 16 12:13:25.081923 systemd-networkd[1004]: eth1: Interface name change detected, renamed to enP63690s1.
Dec 16 12:13:25.089911 kernel: mlx5_core f8ca:00:02.0 enP63690s1: renamed from eth1
Dec 16 12:13:25.157361 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:13:25.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:25.163347 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:13:25.175517 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:13:25.188438 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:13:25.201258 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:13:25.229028 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:13:25.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:25.250493 kernel: mlx5_core f8ca:00:02.0 enP63690s1: Link up
Dec 16 12:13:25.291054 systemd-networkd[1004]: enP63690s1: Link UP
Dec 16 12:13:25.294591 kernel: hv_netvsc 000d3afb-252c-000d-3afb-252c000d3afb eth0: Data path switched to VF: enP63690s1
Dec 16 12:13:25.464829 systemd-networkd[1004]: enP63690s1: Gained carrier
Dec 16 12:13:25.704874 systemd-networkd[1004]: eth0: Gained IPv6LL
Dec 16 12:13:26.064610 disk-uuid[1109]: Warning: The kernel is still using the old partition table.
Dec 16 12:13:26.064610 disk-uuid[1109]: The new table will be used at the next reboot or after you
Dec 16 12:13:26.064610 disk-uuid[1109]: run partprobe(8) or kpartx(8)
Dec 16 12:13:26.064610 disk-uuid[1109]: The operation has completed successfully.
Dec 16 12:13:26.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:26.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:26.073673 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:13:26.073797 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:13:26.087780 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:13:26.153502 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1265)
Dec 16 12:13:26.167931 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47
Dec 16 12:13:26.167988 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:13:26.198907 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 12:13:26.198958 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 12:13:26.213382 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:13:26.233001 kernel: BTRFS info (device sda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47
Dec 16 12:13:26.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:26.220559 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:13:27.078833 ignition[1284]: Ignition 2.24.0
Dec 16 12:13:27.078850 ignition[1284]: Stage: fetch-offline
Dec 16 12:13:27.083625 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:13:27.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:27.080306 ignition[1284]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:13:27.099068 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:13:27.080329 ignition[1284]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:13:27.080532 ignition[1284]: parsed url from cmdline: ""
Dec 16 12:13:27.080535 ignition[1284]: no config URL provided
Dec 16 12:13:27.080539 ignition[1284]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:13:27.080549 ignition[1284]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:13:27.080552 ignition[1284]: failed to fetch config: resource requires networking
Dec 16 12:13:27.080798 ignition[1284]: Ignition finished successfully
Dec 16 12:13:27.134194 ignition[1292]: Ignition 2.24.0
Dec 16 12:13:27.134200 ignition[1292]: Stage: fetch
Dec 16 12:13:27.134444 ignition[1292]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:13:27.134455 ignition[1292]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:13:27.134539 ignition[1292]: parsed url from cmdline: ""
Dec 16 12:13:27.134543 ignition[1292]: no config URL provided
Dec 16 12:13:27.134546 ignition[1292]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:13:27.134556 ignition[1292]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:13:27.134571 ignition[1292]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 16 12:13:27.251410 ignition[1292]: GET result: OK
Dec 16 12:13:27.251494 ignition[1292]: config has been read from IMDS userdata
Dec 16 12:13:27.251507 ignition[1292]: parsing config with SHA512: c96ecbc2b3e9946565ce047aed2545762a5e8241a7e186d1f97b45208e436b5d87240245290ea4eaf1ceb27287b8a257b9ea4be8dcea7eb9abcf32504de45c6e
Dec 16 12:13:27.261826 unknown[1292]: fetched base config from "system"
Dec 16 12:13:27.261835 unknown[1292]: fetched base config from "system"
Dec 16 12:13:27.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:27.262759 ignition[1292]: fetch: fetch complete
Dec 16 12:13:27.261839 unknown[1292]: fetched user config from "azure"
Dec 16 12:13:27.262764 ignition[1292]: fetch: fetch passed
Dec 16 12:13:27.266630 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:13:27.262821 ignition[1292]: Ignition finished successfully
Dec 16 12:13:27.274013 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:13:27.313047 ignition[1298]: Ignition 2.24.0
Dec 16 12:13:27.313065 ignition[1298]: Stage: kargs
Dec 16 12:13:27.317302 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:13:27.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:13:27.313288 ignition[1298]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:13:27.327441 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:13:27.313299 ignition[1298]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:13:27.313944 ignition[1298]: kargs: kargs passed
Dec 16 12:13:27.313991 ignition[1298]: Ignition finished successfully
Dec 16 12:13:27.362300 ignition[1304]: Ignition 2.24.0
Dec 16 12:13:27.362313 ignition[1304]: Stage: disks
Dec 16 12:13:27.362546 ignition[1304]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:13:27.369525 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:13:27.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:27.362553 ignition[1304]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:27.379629 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:13:27.363230 ignition[1304]: disks: disks passed Dec 16 12:13:27.390898 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:13:27.363273 ignition[1304]: Ignition finished successfully Dec 16 12:13:27.403316 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:13:27.414269 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:13:27.423467 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:13:27.434943 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:13:27.549846 systemd-fsck[1312]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 12:13:27.559684 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:13:27.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:27.568539 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:13:27.851523 kernel: EXT4-fs (sda9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 12:13:27.852455 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:13:27.857359 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:13:27.898290 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:13:27.918310 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:13:27.928654 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 12:13:27.941840 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:13:27.941885 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:13:27.949144 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:13:27.972654 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:13:27.998495 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1326) Dec 16 12:13:28.010864 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:13:28.010926 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:13:28.023073 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:13:28.023144 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:13:28.024830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:13:28.896779 coreos-metadata[1328]: Dec 16 12:13:28.896 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:13:28.906609 coreos-metadata[1328]: Dec 16 12:13:28.905 INFO Fetch successful Dec 16 12:13:28.906609 coreos-metadata[1328]: Dec 16 12:13:28.905 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:13:28.924136 coreos-metadata[1328]: Dec 16 12:13:28.923 INFO Fetch successful Dec 16 12:13:28.938042 coreos-metadata[1328]: Dec 16 12:13:28.938 INFO wrote hostname ci-4547.0.0-a-4d45b340a5 to /sysroot/etc/hostname Dec 16 12:13:28.947814 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:13:28.963409 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 12:13:28.963433 kernel: audit: type=1130 audit(1765887208.953:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:28.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.070744 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:13:30.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.102512 kernel: audit: type=1130 audit(1765887210.077:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.102805 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:13:30.115243 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:13:30.137954 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:13:30.147686 kernel: BTRFS info (device sda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:13:30.171480 ignition[1429]: INFO : Ignition 2.24.0 Dec 16 12:13:30.171480 ignition[1429]: INFO : Stage: mount Dec 16 12:13:30.200576 kernel: audit: type=1130 audit(1765887210.181:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.173516 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:13:30.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:30.221258 ignition[1429]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:30.221258 ignition[1429]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:30.221258 ignition[1429]: INFO : mount: mount passed Dec 16 12:13:30.221258 ignition[1429]: INFO : Ignition finished successfully Dec 16 12:13:30.249790 kernel: audit: type=1130 audit(1765887210.203:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:30.182274 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:13:30.205452 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:13:30.258592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:13:30.298729 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1440) Dec 16 12:13:30.310989 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:13:30.311021 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:13:30.322292 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:13:30.322358 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:13:30.323884 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:13:30.356925 ignition[1457]: INFO : Ignition 2.24.0 Dec 16 12:13:30.356925 ignition[1457]: INFO : Stage: files Dec 16 12:13:30.365230 ignition[1457]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:30.365230 ignition[1457]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:30.365230 ignition[1457]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:13:30.384700 ignition[1457]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:13:30.384700 ignition[1457]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:13:30.470985 ignition[1457]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:13:30.480002 ignition[1457]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:13:30.480002 ignition[1457]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:13:30.479015 unknown[1457]: wrote ssh authorized keys file for user: core Dec 16 12:13:30.501619 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:13:30.501619 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:13:30.541493 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:13:30.618081 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:13:30.618081 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:13:30.638500 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:13:30.638500 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file 
"/sysroot/home/core/nginx.yaml" Dec 16 12:13:30.638500 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:13:30.638500 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:13:30.638500 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:13:30.638500 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:13:30.638500 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:13:30.705531 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:13:30.705531 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:13:30.705531 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:13:30.705531 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:13:30.705531 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:13:30.705531 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 16 12:13:31.186219 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:13:31.366130 ignition[1457]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:13:31.366130 ignition[1457]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:13:31.384463 ignition[1457]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:13:31.397762 ignition[1457]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:13:31.397762 ignition[1457]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:13:31.397762 ignition[1457]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:13:31.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:31.441768 ignition[1457]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:13:31.441768 ignition[1457]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:13:31.441768 ignition[1457]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:13:31.441768 ignition[1457]: INFO : files: files passed Dec 16 12:13:31.441768 ignition[1457]: INFO : Ignition finished successfully Dec 16 12:13:31.489205 kernel: audit: type=1130 audit(1765887211.419:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.408381 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:13:31.421112 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:13:31.536590 kernel: audit: type=1130 audit(1765887211.499:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.536627 kernel: audit: type=1131 audit(1765887211.499:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.472760 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:13:31.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.483815 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:13:31.568323 kernel: audit: type=1130 audit(1765887211.538:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.568348 initrd-setup-root-after-ignition[1487]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:13:31.568348 initrd-setup-root-after-ignition[1487]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:13:31.483919 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:13:31.594687 initrd-setup-root-after-ignition[1491]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:13:31.506427 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:13:31.539791 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:13:31.568648 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Dec 16 12:13:31.626443 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:13:31.628514 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:13:31.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.656444 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:13:31.661063 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:13:31.686807 kernel: audit: type=1130 audit(1765887211.636:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.686832 kernel: audit: type=1131 audit(1765887211.655:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.683652 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:13:31.687644 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:13:31.732604 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:13:31.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.744573 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:13:31.767226 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:13:31.767335 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:13:31.780047 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:13:31.791495 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:13:31.801589 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:13:31.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.801719 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:13:31.815567 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:13:31.821626 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:13:31.832232 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:13:31.843869 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:13:31.853755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:13:31.866048 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:13:31.877764 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:13:31.888222 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 12:13:31.898319 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:13:31.907542 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:13:31.917356 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:13:31.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.926045 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:13:31.926173 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:13:31.938978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:13:31.944263 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:13:31.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.954325 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:13:31.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.958401 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:13:31.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.964259 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:13:32.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:31.964385 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:13:31.979232 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:13:31.979339 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:13:31.985549 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:13:31.985620 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:13:31.994360 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 12:13:31.994440 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:13:32.006444 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:13:32.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:32.070086 ignition[1512]: INFO : Ignition 2.24.0 Dec 16 12:13:32.070086 ignition[1512]: INFO : Stage: umount Dec 16 12:13:32.070086 ignition[1512]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:13:32.070086 ignition[1512]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:13:32.070086 ignition[1512]: INFO : umount: umount passed Dec 16 12:13:32.070086 ignition[1512]: INFO : Ignition finished successfully Dec 16 12:13:32.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.036706 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:13:32.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.052137 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:13:32.055638 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:13:32.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.066248 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:13:32.066356 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:13:32.075381 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:13:32.075469 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:13:32.087295 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:13:32.087402 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:13:32.099093 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:13:32.099325 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:13:32.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.110590 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:13:32.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:32.110661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:13:32.120752 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:13:32.120815 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:13:32.128934 systemd[1]: Stopped target network.target - Network. Dec 16 12:13:32.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.137875 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:13:32.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.137945 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:13:32.148309 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:13:32.157142 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:13:32.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.161498 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:13:32.302000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:13:32.308000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:13:32.166996 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:13:32.176401 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:13:32.190842 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:13:32.190904 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:13:32.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.201843 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:13:32.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.201899 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:13:32.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.210157 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:13:32.210177 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:13:32.218812 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:13:32.218878 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:13:32.226819 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Dec 16 12:13:32.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.226856 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:13:32.237304 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:13:32.249254 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:13:32.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.260102 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:13:32.260741 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:13:32.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.260825 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:13:32.462101 kernel: hv_netvsc 000d3afb-252c-000d-3afb-252c000d3afb eth0: Data path switched from VF: enP63690s1 Dec 16 12:13:32.270052 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:13:32.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.270151 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:13:32.289839 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:13:32.289932 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:13:32.304841 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:13:32.314568 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:13:32.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.314620 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:13:32.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.324037 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:13:32.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.338121 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:13:32.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:32.338222 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:13:32.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.347109 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:13:32.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.347176 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:13:32.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.356820 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:13:32.356914 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:13:32.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:32.365622 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:13:32.398754 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:13:32.398879 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:13:32.406090 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:13:32.406126 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:13:32.412007 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:13:32.412031 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:13:32.412265 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:13:32.412312 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:13:32.434648 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:13:32.434726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:13:32.447740 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:13:32.447797 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:13:32.473965 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:13:32.492378 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:13:32.492496 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:13:32.511628 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:13:32.511700 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:13:32.523295 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Dec 16 12:13:32.523366 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:13:32.534246 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:13:32.534307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:13:32.740890 systemd-journald[606]: Received SIGTERM from PID 1 (systemd). Dec 16 12:13:32.540290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:13:32.540337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:32.551951 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:13:32.552048 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:13:32.560225 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:13:32.560310 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:13:32.571736 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:13:32.571816 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:13:32.582072 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:13:32.590559 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:13:32.590688 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:13:32.601788 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:13:32.630466 systemd[1]: Switching root. Dec 16 12:13:32.801765 systemd-journald[606]: Journal stopped Dec 16 12:13:37.577586 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:13:37.577610 kernel: SELinux: policy capability open_perms=1 Dec 16 12:13:37.577619 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:13:37.577625 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:13:37.577632 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:13:37.577637 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:13:37.577646 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:13:37.577652 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:13:37.577657 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:13:37.577664 systemd[1]: Successfully loaded SELinux policy in 144.611ms. Dec 16 12:13:37.577673 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.694ms. Dec 16 12:13:37.577680 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:13:37.577686 systemd[1]: Detected virtualization microsoft. Dec 16 12:13:37.577693 systemd[1]: Detected architecture arm64. Dec 16 12:13:37.577700 systemd[1]: Detected first boot. Dec 16 12:13:37.577707 systemd[1]: Hostname set to . Dec 16 12:13:37.577714 systemd[1]: Initializing machine ID from random generator. 
Dec 16 12:13:37.577720 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 16 12:13:37.577726 kernel: audit: type=1334 audit(1765887214.495:90): prog-id=10 op=LOAD Dec 16 12:13:37.577733 kernel: audit: type=1334 audit(1765887214.495:91): prog-id=10 op=UNLOAD Dec 16 12:13:37.577739 kernel: audit: type=1334 audit(1765887214.500:92): prog-id=11 op=LOAD Dec 16 12:13:37.577745 kernel: audit: type=1334 audit(1765887214.500:93): prog-id=11 op=UNLOAD Dec 16 12:13:37.577751 zram_generator::config[1555]: No configuration found. Dec 16 12:13:37.577758 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:13:37.577764 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:13:37.577771 kernel: audit: type=1334 audit(1765887216.622:94): prog-id=12 op=LOAD Dec 16 12:13:37.577777 kernel: audit: type=1334 audit(1765887216.622:95): prog-id=3 op=UNLOAD Dec 16 12:13:37.577784 kernel: audit: type=1334 audit(1765887216.626:96): prog-id=13 op=LOAD Dec 16 12:13:37.577789 kernel: audit: type=1334 audit(1765887216.631:97): prog-id=14 op=LOAD Dec 16 12:13:37.577796 kernel: audit: type=1334 audit(1765887216.631:98): prog-id=4 op=UNLOAD Dec 16 12:13:37.577802 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:13:37.577809 kernel: audit: type=1334 audit(1765887216.631:99): prog-id=5 op=UNLOAD Dec 16 12:13:37.577815 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:13:37.577822 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:13:37.577829 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:13:37.577836 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:13:37.577843 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:13:37.577850 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:13:37.577857 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:13:37.577875 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:13:37.577884 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:13:37.577891 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:13:37.577898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:13:37.577904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:13:37.577913 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:13:37.577920 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:13:37.577926 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:13:37.577933 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:13:37.577941 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:13:37.577947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:13:37.577954 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:13:37.577961 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:13:37.577968 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Dec 16 12:13:37.577975 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:13:37.577981 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:13:37.577988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:13:37.577994 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:13:37.578002 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:13:37.578009 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:13:37.578015 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:13:37.578022 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:13:37.578028 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:13:37.578036 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:13:37.578043 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:13:37.578049 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:13:37.578056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:13:37.578063 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:13:37.578070 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:13:37.578076 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:13:37.578084 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:13:37.578090 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:13:37.578097 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:13:37.578103 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:13:37.578110 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:13:37.578117 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:13:37.578124 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:13:37.578131 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:13:37.578138 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:13:37.578145 systemd[1]: Reached target machines.target - Containers. Dec 16 12:13:37.578151 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:13:37.578159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:13:37.578166 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:13:37.578173 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:13:37.578179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:13:37.578186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:13:37.578192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:13:37.578199 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 16 12:13:37.578207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:13:37.578214 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:13:37.578221 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:13:37.578228 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:13:37.578235 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:13:37.578241 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:13:37.578248 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:13:37.578256 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:13:37.578263 kernel: fuse: init (API version 7.41) Dec 16 12:13:37.578269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:13:37.578276 kernel: ACPI: bus type drm_connector registered Dec 16 12:13:37.578282 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:13:37.578289 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:13:37.578297 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:13:37.578322 systemd-journald[1643]: Collecting audit messages is enabled. Dec 16 12:13:37.578338 systemd-journald[1643]: Journal started Dec 16 12:13:37.578354 systemd-journald[1643]: Runtime Journal (/run/log/journal/1a99af4cfb614aa5973be9be7733f98b) is 8M, max 78.3M, 70.3M free. Dec 16 12:13:37.015000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:13:37.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.447000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:13:37.447000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:13:37.447000 audit: BPF prog-id=15 op=LOAD Dec 16 12:13:37.447000 audit: BPF prog-id=16 op=LOAD Dec 16 12:13:37.447000 audit: BPF prog-id=17 op=LOAD Dec 16 12:13:37.575000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:13:37.575000 audit[1643]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffec39af00 a2=4000 a3=0 items=0 ppid=1 pid=1643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:13:37.575000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:13:36.617320 systemd[1]: Queued start job for default target multi-user.target. 
Dec 16 12:13:36.632584 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:13:36.637921 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:13:36.639685 systemd[1]: systemd-journald.service: Consumed 2.942s CPU time. Dec 16 12:13:37.592520 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:13:37.606197 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:13:37.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.610138 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:13:37.615343 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:13:37.620839 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:13:37.625892 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:13:37.630885 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:13:37.636340 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:13:37.641535 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:13:37.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.647187 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:13:37.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.653822 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:13:37.653965 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:13:37.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.660075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:13:37.660213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:13:37.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.665336 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:13:37.665468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 16 12:13:37.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.670908 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:13:37.671029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:13:37.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.677009 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:13:37.677144 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:13:37.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.681972 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:13:37.682089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:13:37.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.687333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:13:37.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.693767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:13:37.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.700916 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:13:37.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:37.708682 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:13:37.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.715602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:13:37.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.731589 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:13:37.737854 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:13:37.744670 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:13:37.759609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:13:37.765598 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:13:37.765635 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:13:37.771127 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:13:37.794326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:13:37.794465 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:13:37.795610 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:13:37.807121 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:13:37.812942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:13:37.813822 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:13:37.818384 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:13:37.819245 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:13:37.826001 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:13:37.832228 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:13:37.839126 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:13:37.845158 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:13:37.866405 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:13:37.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.872284 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Dec 16 12:13:37.882057 systemd-journald[1643]: Time spent on flushing to /var/log/journal/1a99af4cfb614aa5973be9be7733f98b is 15.264ms for 1079 entries. Dec 16 12:13:37.882057 systemd-journald[1643]: System Journal (/var/log/journal/1a99af4cfb614aa5973be9be7733f98b) is 8M, max 2.2G, 2.2G free. Dec 16 12:13:37.937966 systemd-journald[1643]: Received client request to flush runtime journal. Dec 16 12:13:37.938027 kernel: loop1: detected capacity change from 0 to 45344 Dec 16 12:13:37.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.884654 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:13:37.899439 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:13:37.919028 systemd-tmpfiles[1697]: ACLs are not supported, ignoring. Dec 16 12:13:37.919036 systemd-tmpfiles[1697]: ACLs are not supported, ignoring. Dec 16 12:13:37.922163 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:13:37.929252 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:13:37.941701 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:13:37.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:37.965376 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:13:37.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.070675 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:13:38.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.076000 audit: BPF prog-id=18 op=LOAD Dec 16 12:13:38.076000 audit: BPF prog-id=19 op=LOAD Dec 16 12:13:38.076000 audit: BPF prog-id=20 op=LOAD Dec 16 12:13:38.077695 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:13:38.083000 audit: BPF prog-id=21 op=LOAD Dec 16 12:13:38.084355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:13:38.091648 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:13:38.101000 audit: BPF prog-id=22 op=LOAD Dec 16 12:13:38.101000 audit: BPF prog-id=23 op=LOAD Dec 16 12:13:38.101000 audit: BPF prog-id=24 op=LOAD Dec 16 12:13:38.104642 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:13:38.109677 systemd-tmpfiles[1716]: ACLs are not supported, ignoring. 
Dec 16 12:13:38.109688 systemd-tmpfiles[1716]: ACLs are not supported, ignoring. Dec 16 12:13:38.110000 audit: BPF prog-id=25 op=LOAD Dec 16 12:13:38.110000 audit: BPF prog-id=26 op=LOAD Dec 16 12:13:38.110000 audit: BPF prog-id=27 op=LOAD Dec 16 12:13:38.112087 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:13:38.120753 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:13:38.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.145925 systemd-nsresourced[1717]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:13:38.149975 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:13:38.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.163260 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:13:38.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.274744 kernel: loop2: detected capacity change from 0 to 211168 Dec 16 12:13:38.307531 kernel: loop3: detected capacity change from 0 to 100192 Dec 16 12:13:38.356713 systemd-oomd[1714]: No swap; memory pressure usage will be degraded Dec 16 12:13:38.357106 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:13:38.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.393577 systemd-resolved[1715]: Positive Trust Anchors: Dec 16 12:13:38.393592 systemd-resolved[1715]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:13:38.393594 systemd-resolved[1715]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:13:38.393613 systemd-resolved[1715]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:13:38.452669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:13:38.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:38.458000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:13:38.458000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:13:38.459000 audit: BPF prog-id=28 op=LOAD Dec 16 12:13:38.459000 audit: BPF prog-id=29 op=LOAD Dec 16 12:13:38.460876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:13:38.488453 systemd-udevd[1738]: Using default interface naming scheme 'v257'. Dec 16 12:13:38.512353 systemd-resolved[1715]: Using system hostname 'ci-4547.0.0-a-4d45b340a5'. Dec 16 12:13:38.513636 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:13:38.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.519588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:13:38.692799 kernel: loop4: detected capacity change from 0 to 27544 Dec 16 12:13:38.694020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:13:38.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.703838 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:13:38.705000 audit: BPF prog-id=30 op=LOAD Dec 16 12:13:38.707795 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:13:38.746021 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:13:38.808508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#22 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:13:38.833548 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:13:38.837244 systemd-networkd[1754]: lo: Link UP Dec 16 12:13:38.837557 systemd-networkd[1754]: lo: Gained carrier Dec 16 12:13:38.839988 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:13:38.846252 systemd-networkd[1754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:13:38.846262 systemd-networkd[1754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:13:38.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:38.848864 systemd[1]: Reached target network.target - Network. Dec 16 12:13:38.854626 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:13:38.866025 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Dec 16 12:13:38.883934 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 12:13:38.889864 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 12:13:38.889894 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 12:13:38.893982 kernel: Console: switching to colour dummy device 80x25 Dec 16 12:13:38.901191 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:13:38.912494 kernel: hv_vmbus: registering driver hv_balloon Dec 16 12:13:38.920648 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 12:13:38.920810 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 16 12:13:38.950517 kernel: mlx5_core f8ca:00:02.0 enP63690s1: Link up Dec 16 12:13:38.977503 kernel: hv_netvsc 000d3afb-252c-000d-3afb-252c000d3afb eth0: Data path switched to VF: enP63690s1 Dec 16 12:13:38.978737 systemd-networkd[1754]: enP63690s1: Link UP Dec 16 12:13:38.979084 systemd-networkd[1754]: eth0: Link UP Dec 16 12:13:38.979090 systemd-networkd[1754]: eth0: Gained carrier Dec 16 12:13:38.979109 systemd-networkd[1754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:13:38.986202 systemd-networkd[1754]: enP63690s1: Gained carrier Dec 16 12:13:38.987877 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:13:38.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.001632 systemd-networkd[1754]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:13:39.010644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:13:39.021715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:13:39.021922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:39.031499 kernel: MACsec IEEE 802.1AE Dec 16 12:13:39.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.033879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:13:39.107558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:13:39.114719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:13:39.140511 kernel: loop5: detected capacity change from 0 to 45344 Dec 16 12:13:39.154494 kernel: loop6: detected capacity change from 0 to 211168 Dec 16 12:13:39.173504 kernel: loop7: detected capacity change from 0 to 100192 Dec 16 12:13:39.187499 kernel: loop1: detected capacity change from 0 to 27544 Dec 16 12:13:39.202539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Dec 16 12:13:39.204743 (sd-merge)[1866]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 12:13:39.207615 (sd-merge)[1866]: Merged extensions into '/usr'. Dec 16 12:13:39.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.217884 systemd[1]: Reload requested from client PID 1695 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:13:39.217897 systemd[1]: Reloading... Dec 16 12:13:39.260511 zram_generator::config[1899]: No configuration found. Dec 16 12:13:39.445137 systemd[1]: Reloading finished in 226 ms. Dec 16 12:13:39.473865 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:13:39.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.489066 systemd[1]: Starting ensure-sysext.service... Dec 16 12:13:39.495676 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:13:39.510435 kernel: kauditd_printk_skb: 69 callbacks suppressed Dec 16 12:13:39.510578 kernel: audit: type=1334 audit(1765887219.501:167): prog-id=31 op=LOAD Dec 16 12:13:39.501000 audit: BPF prog-id=31 op=LOAD Dec 16 12:13:39.501000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:13:39.516053 kernel: audit: type=1334 audit(1765887219.501:168): prog-id=22 op=UNLOAD Dec 16 12:13:39.505000 audit: BPF prog-id=32 op=LOAD Dec 16 12:13:39.521929 kernel: audit: type=1334 audit(1765887219.505:169): prog-id=32 op=LOAD Dec 16 12:13:39.510000 audit: BPF prog-id=33 op=LOAD Dec 16 12:13:39.527198 kernel: audit: type=1334 audit(1765887219.510:170): prog-id=33 op=LOAD Dec 16 12:13:39.510000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:13:39.532598 kernel: audit: type=1334 audit(1765887219.510:171): prog-id=23 op=UNLOAD Dec 16 12:13:39.532634 kernel: audit: type=1334 audit(1765887219.510:172): prog-id=24 op=UNLOAD Dec 16 12:13:39.510000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:13:39.521000 audit: BPF prog-id=34 op=LOAD Dec 16 12:13:39.542796 kernel: audit: type=1334 audit(1765887219.521:173): prog-id=34 op=LOAD Dec 16 12:13:39.545000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:13:39.551513 kernel: audit: type=1334 audit(1765887219.545:174): prog-id=21 op=UNLOAD Dec 16 12:13:39.551000 audit: BPF prog-id=35 op=LOAD Dec 16 12:13:39.558498 kernel: audit: type=1334 audit(1765887219.551:175): prog-id=35 op=LOAD Dec 16 12:13:39.551000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:13:39.556000 audit: BPF prog-id=36 op=LOAD Dec 16 12:13:39.557000 audit: BPF prog-id=37 op=LOAD Dec 16 12:13:39.557000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:13:39.557000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:13:39.557000 audit: BPF prog-id=38 op=LOAD Dec 16 12:13:39.557000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:13:39.557000 audit: BPF prog-id=39 op=LOAD Dec 16 12:13:39.557000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:13:39.557000 audit: BPF prog-id=40 op=LOAD Dec 16 12:13:39.558000 audit: BPF prog-id=41 op=LOAD Dec 16 12:13:39.558000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:13:39.558000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:13:39.565186 kernel: audit: type=1334 audit(1765887219.551:176): prog-id=25 op=UNLOAD Dec 16 
12:13:39.565000 audit: BPF prog-id=42 op=LOAD Dec 16 12:13:39.565000 audit: BPF prog-id=43 op=LOAD Dec 16 12:13:39.565000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:13:39.565000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:13:39.565000 audit: BPF prog-id=44 op=LOAD Dec 16 12:13:39.565000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:13:39.565000 audit: BPF prog-id=45 op=LOAD Dec 16 12:13:39.565000 audit: BPF prog-id=46 op=LOAD Dec 16 12:13:39.565000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:13:39.565000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:13:39.568978 systemd-tmpfiles[1958]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:13:39.569000 systemd-tmpfiles[1958]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:13:39.569492 systemd-tmpfiles[1958]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:13:39.570138 systemd-tmpfiles[1958]: ACLs are not supported, ignoring. Dec 16 12:13:39.570175 systemd-tmpfiles[1958]: ACLs are not supported, ignoring. Dec 16 12:13:39.571575 systemd[1]: Reload requested from client PID 1957 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:13:39.571670 systemd[1]: Reloading... Dec 16 12:13:39.588881 systemd-tmpfiles[1958]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:13:39.588893 systemd-tmpfiles[1958]: Skipping /boot Dec 16 12:13:39.597297 systemd-tmpfiles[1958]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:13:39.597312 systemd-tmpfiles[1958]: Skipping /boot Dec 16 12:13:39.649816 zram_generator::config[1989]: No configuration found. Dec 16 12:13:39.805895 systemd[1]: Reloading finished in 233 ms. Dec 16 12:13:39.830041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:13:39.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:13:39.840000 audit: BPF prog-id=47 op=LOAD Dec 16 12:13:39.840000 audit: BPF prog-id=48 op=LOAD Dec 16 12:13:39.840000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:13:39.840000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:13:39.840000 audit: BPF prog-id=49 op=LOAD Dec 16 12:13:39.840000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:13:39.841000 audit: BPF prog-id=50 op=LOAD Dec 16 12:13:39.841000 audit: BPF prog-id=51 op=LOAD Dec 16 12:13:39.841000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:13:39.841000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:13:39.841000 audit: BPF prog-id=52 op=LOAD Dec 16 12:13:39.841000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:13:39.841000 audit: BPF prog-id=53 op=LOAD Dec 16 12:13:39.841000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:13:39.841000 audit: BPF prog-id=54 op=LOAD Dec 16 12:13:39.841000 audit: BPF prog-id=55 op=LOAD Dec 16 12:13:39.841000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:13:39.841000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:13:39.842000 audit: BPF prog-id=56 op=LOAD Dec 16 12:13:39.845000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:13:39.846000 audit: BPF prog-id=57 op=LOAD Dec 16 12:13:39.846000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:13:39.846000 audit: BPF prog-id=58 op=LOAD Dec 16 12:13:39.846000 audit: BPF prog-id=59 op=LOAD Dec 16 12:13:39.846000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:13:39.846000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:13:39.847000 audit: BPF prog-id=60 op=LOAD Dec 16 12:13:39.847000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:13:39.847000 audit: BPF prog-id=61 op=LOAD Dec 16 12:13:39.847000 audit: BPF prog-id=62 op=LOAD Dec 16 12:13:39.847000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:13:39.847000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:13:39.851606 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:13:39.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.868084 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:13:39.880572 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:13:39.886288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:13:39.887618 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:13:39.899995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:13:39.909783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:13:39.914752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:13:39.914920 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:13:39.916674 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:13:39.921213 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:13:39.923821 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Dec 16 12:13:39.932761 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:13:39.941251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:13:39.942945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:13:39.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.950000 audit[2064]: SYSTEM_BOOT pid=2064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.950708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:13:39.950913 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:13:39.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.957655 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:13:39.957853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:13:39.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:39.973311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:13:39.974806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:13:39.985707 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:13:39.995076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:13:40.002684 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:13:40.007829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:13:40.008434 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:13:40.008624 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:13:40.008917 systemd[1]: Reached target time-set.target - System Time Set. 
Dec 16 12:13:40.018670 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:13:40.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.026080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:13:40.026270 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:13:40.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.032767 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:13:40.032931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:13:40.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.040290 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:13:40.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.047360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:13:40.047673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:13:40.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.056094 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:13:40.056305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:13:40.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.066834 systemd[1]: Finished ensure-sysext.service. 
Dec 16 12:13:40.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:13:40.073980 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:13:40.074050 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:13:40.345000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:13:40.345000 audit[2092]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff8868680 a2=420 a3=0 items=0 ppid=2051 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:13:40.345000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:13:40.346767 augenrules[2092]: No rules Dec 16 12:13:40.348060 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:13:40.348320 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:13:40.767301 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:13:40.773744 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:13:40.808594 systemd-networkd[1754]: eth0: Gained IPv6LL Dec 16 12:13:40.811572 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:13:40.817846 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:13:45.076488 ldconfig[2062]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:13:45.087739 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:13:45.095199 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:13:45.112845 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:13:45.118020 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:13:45.123287 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:13:45.129198 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:13:45.135293 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:13:45.140088 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:13:45.146582 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:13:45.152620 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:13:45.158059 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:13:45.164553 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Dec 16 12:13:45.164592 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:13:45.168412 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:13:45.173661 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:13:45.180550 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:13:45.187191 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:13:45.193641 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:13:45.198984 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:13:45.214235 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:13:45.219127 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:13:45.226021 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:13:45.231098 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:13:45.235799 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:13:45.240595 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:13:45.240617 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:13:45.242934 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 12:13:45.256316 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:13:45.263650 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:13:45.274536 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:13:45.276641 chronyd[2105]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 12:13:45.279920 chronyd[2105]: Timezone right/UTC failed leap second check, ignoring Dec 16 12:13:45.280091 chronyd[2105]: Loaded seccomp filter (level 2) Dec 16 12:13:45.281651 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:13:45.294249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:13:45.301401 jq[2113]: false Dec 16 12:13:45.301693 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:13:45.306172 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:13:45.308651 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 12:13:45.314065 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 12:13:45.315740 KVP[2115]: KVP starting; pid is:2115 Dec 16 12:13:45.318602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:13:45.318766 KVP[2115]: KVP LIC Version: 3.1 Dec 16 12:13:45.324411 kernel: hv_utils: KVP IC version 4.0 Dec 16 12:13:45.330403 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:13:45.336902 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:13:45.344995 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:13:45.352365 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 16 12:13:45.359769 extend-filesystems[2114]: Found /dev/sda6 Dec 16 12:13:45.360710 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:13:45.379693 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:13:45.384848 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:13:45.385723 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:13:45.386779 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:13:45.392843 extend-filesystems[2114]: Found /dev/sda9 Dec 16 12:13:45.397871 extend-filesystems[2114]: Checking size of /dev/sda9 Dec 16 12:13:45.403561 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:13:45.412284 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 12:13:45.417291 jq[2145]: true Dec 16 12:13:45.420969 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:13:45.428347 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:13:45.434518 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:13:45.437800 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:13:45.438003 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:13:45.446761 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:13:45.454906 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:13:45.457060 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:13:45.499159 extend-filesystems[2114]: Resized partition /dev/sda9 Dec 16 12:13:45.513142 update_engine[2135]: I20251216 12:13:45.502122 2135 main.cc:92] Flatcar Update Engine starting Dec 16 12:13:45.513337 jq[2155]: true Dec 16 12:13:45.522248 systemd-logind[2131]: New seat seat0. Dec 16 12:13:45.524587 systemd-logind[2131]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 12:13:45.524830 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:13:45.539558 extend-filesystems[2170]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:13:45.566783 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 12:13:45.566880 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Dec 16 12:13:45.595999 tar[2154]: linux-arm64/LICENSE Dec 16 12:13:45.596391 tar[2154]: linux-arm64/helm Dec 16 12:13:45.610251 extend-filesystems[2170]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 12:13:45.610251 extend-filesystems[2170]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 12:13:45.610251 extend-filesystems[2170]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Dec 16 12:13:45.666298 extend-filesystems[2114]: Resized filesystem in /dev/sda9 Dec 16 12:13:45.696179 bash[2201]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:13:45.612191 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 16 12:13:45.697236 update_engine[2135]: I20251216 12:13:45.692121 2135 update_check_scheduler.cc:74] Next update check in 10m3s Dec 16 12:13:45.682545 dbus-daemon[2109]: [system] SELinux support is enabled Dec 16 12:13:45.613138 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:13:45.659360 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:13:45.683595 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:13:45.705287 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:13:45.705749 dbus-daemon[2109]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:13:45.705399 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:13:45.705428 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:13:45.714755 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:13:45.714779 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:13:45.724282 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:13:45.733939 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:13:45.768131 coreos-metadata[2107]: Dec 16 12:13:45.767 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:13:45.778019 coreos-metadata[2107]: Dec 16 12:13:45.774 INFO Fetch successful Dec 16 12:13:45.778019 coreos-metadata[2107]: Dec 16 12:13:45.774 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 12:13:45.781812 coreos-metadata[2107]: Dec 16 12:13:45.781 INFO Fetch successful Dec 16 12:13:45.781812 coreos-metadata[2107]: Dec 16 12:13:45.781 INFO Fetching http://168.63.129.16/machine/6dbb698e-b3c8-4faf-ad73-a94a5ae0411b/86018621%2Df08e%2D4ad0%2D9591%2D7f04edd743ca.%5Fci%2D4547.0.0%2Da%2D4d45b340a5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 12:13:45.784926 coreos-metadata[2107]: Dec 16 12:13:45.784 INFO Fetch successful Dec 16 12:13:45.784926 coreos-metadata[2107]: Dec 16 12:13:45.784 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:13:45.803670 coreos-metadata[2107]: Dec 16 12:13:45.801 INFO Fetch successful Dec 16 12:13:45.856261 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:13:45.867348 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:13:45.931267 sshd_keygen[2134]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:13:45.952089 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:13:45.962572 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:13:45.969660 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 12:13:45.999269 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:13:46.000736 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:13:46.010405 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Dec 16 12:13:46.031768 locksmithd[2244]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:13:46.051311 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 12:13:46.068221 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:13:46.077377 containerd[2156]: time="2025-12-16T12:13:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:13:46.078467 containerd[2156]: time="2025-12-16T12:13:46.077836496Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:13:46.084365 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:13:46.094451 containerd[2156]: time="2025-12-16T12:13:46.094408464Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.52µs" Dec 16 12:13:46.094451 containerd[2156]: time="2025-12-16T12:13:46.094445640Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:13:46.094912 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.096541456Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.096581832Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.096744040Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.096764400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.096821672Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.096829320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.097016216Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.097028936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.097036416Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.097041488Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.097162000Z" level=info msg="skip loading plugin" error="EROFS unsupported, 
please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:13:46.101637 containerd[2156]: time="2025-12-16T12:13:46.097170080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:13:46.106742 containerd[2156]: time="2025-12-16T12:13:46.097226312Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:13:46.106742 containerd[2156]: time="2025-12-16T12:13:46.097360792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:13:46.106742 containerd[2156]: time="2025-12-16T12:13:46.097378192Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:13:46.106742 containerd[2156]: time="2025-12-16T12:13:46.097388152Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:13:46.106742 containerd[2156]: time="2025-12-16T12:13:46.097418392Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:13:46.106742 containerd[2156]: time="2025-12-16T12:13:46.097842768Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:13:46.106742 containerd[2156]: time="2025-12-16T12:13:46.097946056Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:13:46.103850 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:13:46.120712 containerd[2156]: time="2025-12-16T12:13:46.120657312Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.120889888Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.121060968Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.121082296Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.121094264Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.121102832Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.121111952Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.121118600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.121127112Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:13:46.121235 containerd[2156]: time="2025-12-16T12:13:46.121136400Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:13:46.121791 containerd[2156]: time="2025-12-16T12:13:46.121416120Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:13:46.121791 containerd[2156]: time="2025-12-16T12:13:46.121442072Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:13:46.121791 containerd[2156]: time="2025-12-16T12:13:46.121450568Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:13:46.121791 containerd[2156]: time="2025-12-16T12:13:46.121461064Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:13:46.122048 containerd[2156]: time="2025-12-16T12:13:46.121960192Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:13:46.122048 containerd[2156]: time="2025-12-16T12:13:46.121996272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:13:46.122048 containerd[2156]: time="2025-12-16T12:13:46.122007488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:13:46.122048 containerd[2156]: time="2025-12-16T12:13:46.122014392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:13:46.122048 containerd[2156]: time="2025-12-16T12:13:46.122021744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:13:46.122048 containerd[2156]: time="2025-12-16T12:13:46.122028648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:13:46.122343 containerd[2156]: time="2025-12-16T12:13:46.122326168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:13:46.122672 containerd[2156]: time="2025-12-16T12:13:46.122387176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:13:46.122672 containerd[2156]: time="2025-12-16T12:13:46.122609848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:13:46.122672 containerd[2156]: time="2025-12-16T12:13:46.122620760Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:13:46.122872 containerd[2156]: time="2025-12-16T12:13:46.122854720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:13:46.123029 containerd[2156]: time="2025-12-16T12:13:46.122958368Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:13:46.123564 containerd[2156]: time="2025-12-16T12:13:46.123463440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:13:46.123564 containerd[2156]: time="2025-12-16T12:13:46.123500864Z" level=info msg="Start snapshots syncer" Dec 16 12:13:46.123564 containerd[2156]: time="2025-12-16T12:13:46.123521712Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:13:46.124795 containerd[2156]: time="2025-12-16T12:13:46.124682192Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:13:46.124795 containerd[2156]: time="2025-12-16T12:13:46.124737064Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:13:46.125183 containerd[2156]: time="2025-12-16T12:13:46.125027664Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:13:46.125784 containerd[2156]: time="2025-12-16T12:13:46.125658072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:13:46.125784 containerd[2156]: time="2025-12-16T12:13:46.125735200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:13:46.125784 containerd[2156]: time="2025-12-16T12:13:46.125744232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:13:46.125784 containerd[2156]: time="2025-12-16T12:13:46.125750848Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:13:46.125784 containerd[2156]: time="2025-12-16T12:13:46.125760936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:13:46.126060 containerd[2156]: time="2025-12-16T12:13:46.126043304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:13:46.126564 containerd[2156]: time="2025-12-16T12:13:46.126450752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:13:46.126564 containerd[2156]: time="2025-12-16T12:13:46.126472240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
12:13:46.126564 containerd[2156]: time="2025-12-16T12:13:46.126498648Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:13:46.126564 containerd[2156]: time="2025-12-16T12:13:46.126527032Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:13:46.126564 containerd[2156]: time="2025-12-16T12:13:46.126538096Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:13:46.126564 containerd[2156]: time="2025-12-16T12:13:46.126544280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:13:46.126564 containerd[2156]: time="2025-12-16T12:13:46.126550552Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:13:46.127044 containerd[2156]: time="2025-12-16T12:13:46.126554952Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:13:46.127044 containerd[2156]: time="2025-12-16T12:13:46.126821576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:13:46.127044 containerd[2156]: time="2025-12-16T12:13:46.126834496Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:13:46.127044 containerd[2156]: time="2025-12-16T12:13:46.126849936Z" level=info msg="runtime interface created" Dec 16 12:13:46.127044 containerd[2156]: time="2025-12-16T12:13:46.126853256Z" level=info msg="created NRI interface" Dec 16 12:13:46.127044 containerd[2156]: time="2025-12-16T12:13:46.126858840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:13:46.128916 containerd[2156]: time="2025-12-16T12:13:46.128621176Z" level=info msg="Connect containerd service" Dec 16 12:13:46.128916 containerd[2156]: time="2025-12-16T12:13:46.128662544Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:13:46.131325 containerd[2156]: time="2025-12-16T12:13:46.131135472Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:13:46.206449 tar[2154]: linux-arm64/README.md Dec 16 12:13:46.220965 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:13:46.442687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:13:46.457544 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458053128Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458127440Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458147048Z" level=info msg="Start subscribing containerd event" Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458168808Z" level=info msg="Start recovering state" Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458244936Z" level=info msg="Start event monitor" Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458253800Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458258688Z" level=info msg="Start streaming server" Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458264576Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458269864Z" level=info msg="runtime interface starting up..." Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458273904Z" level=info msg="starting plugins..." Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458283624Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:13:46.459043 containerd[2156]: time="2025-12-16T12:13:46.458366640Z" level=info msg="containerd successfully booted in 0.381338s" Dec 16 12:13:46.458696 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:13:46.464531 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:13:46.471554 systemd[1]: Startup finished in 2.859s (kernel) + 12.399s (initrd) + 12.835s (userspace) = 28.094s. Dec 16 12:13:46.868367 kubelet[2323]: E1216 12:13:46.864470 2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:13:46.871934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:13:46.872055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:13:46.872448 systemd[1]: kubelet.service: Consumed 575ms CPU time, 256.4M memory peak. Dec 16 12:13:47.096382 login[2300]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:13:47.096383 login[2301]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:13:47.105987 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:13:47.107405 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:13:47.110177 systemd-logind[2131]: New session 1 of user core. Dec 16 12:13:47.115902 systemd-logind[2131]: New session 2 of user core. Dec 16 12:13:47.139940 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:13:47.142185 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:13:47.154928 (systemd)[2338]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:13:47.157200 systemd-logind[2131]: New session 3 of user core. Dec 16 12:13:47.296251 systemd[2338]: Queued start job for default target default.target. Dec 16 12:13:47.302783 systemd[2338]: Created slice app.slice - User Application Slice. 
Dec 16 12:13:47.302814 systemd[2338]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:13:47.302824 systemd[2338]: Reached target paths.target - Paths. Dec 16 12:13:47.302870 systemd[2338]: Reached target timers.target - Timers. Dec 16 12:13:47.303934 systemd[2338]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:13:47.304515 systemd[2338]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:13:47.314554 systemd[2338]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:13:47.314813 systemd[2338]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:13:47.314860 systemd[2338]: Reached target sockets.target - Sockets. Dec 16 12:13:47.314899 systemd[2338]: Reached target basic.target - Basic System. Dec 16 12:13:47.314922 systemd[2338]: Reached target default.target - Main User Target. Dec 16 12:13:47.314943 systemd[2338]: Startup finished in 152ms. Dec 16 12:13:47.315291 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:13:47.316658 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:13:47.317334 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:13:47.686588 waagent[2298]: 2025-12-16T12:13:47.686513Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:13:47.691537 waagent[2298]: 2025-12-16T12:13:47.691484Z INFO Daemon Daemon OS: flatcar 4547.0.0 Dec 16 12:13:47.695092 waagent[2298]: 2025-12-16T12:13:47.695054Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:13:47.699119 waagent[2298]: 2025-12-16T12:13:47.699045Z INFO Daemon Daemon Run daemon Dec 16 12:13:47.702576 waagent[2298]: 2025-12-16T12:13:47.702532Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Dec 16 12:13:47.709109 waagent[2298]: 2025-12-16T12:13:47.709070Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:13:47.713986 waagent[2298]: 2025-12-16T12:13:47.713946Z INFO Daemon Daemon Activate resource disk Dec 16 12:13:47.717283 waagent[2298]: 2025-12-16T12:13:47.717247Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:13:47.725777 waagent[2298]: 2025-12-16T12:13:47.725729Z INFO Daemon Daemon Found device: None Dec 16 12:13:47.729914 waagent[2298]: 2025-12-16T12:13:47.729877Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:13:47.737170 waagent[2298]: 2025-12-16T12:13:47.737132Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:13:47.747000 waagent[2298]: 2025-12-16T12:13:47.746959Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:13:47.751475 waagent[2298]: 2025-12-16T12:13:47.751444Z INFO Daemon Daemon Running default provisioning handler Dec 16 12:13:47.760962 waagent[2298]: 2025-12-16T12:13:47.760897Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Dec 16 12:13:47.772009 waagent[2298]: 2025-12-16T12:13:47.771958Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:13:47.779752 waagent[2298]: 2025-12-16T12:13:47.779709Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:13:47.783694 waagent[2298]: 2025-12-16T12:13:47.783663Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:13:47.863686 waagent[2298]: 2025-12-16T12:13:47.862998Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:13:47.890019 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 12:13:47.893504 waagent[2298]: 2025-12-16T12:13:47.891860Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:13:47.895983 waagent[2298]: 2025-12-16T12:13:47.895942Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:13:47.901477 waagent[2298]: 2025-12-16T12:13:47.901430Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 12:13:47.906583 waagent[2298]: 2025-12-16T12:13:47.906548Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:13:47.911083 waagent[2298]: 2025-12-16T12:13:47.911050Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:13:47.915665 waagent[2298]: 2025-12-16T12:13:47.915634Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:13:47.928638 waagent[2298]: 2025-12-16T12:13:47.928599Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:13:47.934332 waagent[2298]: 2025-12-16T12:13:47.934310Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:13:47.938661 waagent[2298]: 2025-12-16T12:13:47.938598Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:13:48.101100 waagent[2298]: 2025-12-16T12:13:48.101009Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:13:48.106493 waagent[2298]: 2025-12-16T12:13:48.106436Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 12:13:48.114732 waagent[2298]: 2025-12-16T12:13:48.114688Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:13:48.139622 waagent[2298]: 2025-12-16T12:13:48.139581Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:13:48.144738 waagent[2298]: 2025-12-16T12:13:48.144701Z INFO Daemon Dec 16 12:13:48.147094 waagent[2298]: 2025-12-16T12:13:48.147063Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: cf80b360-3806-4827-89e7-515838faeb69 eTag: 6944257759668697212 source: Fabric] Dec 16 12:13:48.156977 waagent[2298]: 2025-12-16T12:13:48.156941Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 12:13:48.161935 waagent[2298]: 2025-12-16T12:13:48.161904Z INFO Daemon Dec 16 12:13:48.165287 waagent[2298]: 2025-12-16T12:13:48.165256Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:13:48.176627 waagent[2298]: 2025-12-16T12:13:48.176444Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:13:48.243880 waagent[2298]: 2025-12-16T12:13:48.243739Z INFO Daemon Downloaded certificate {'thumbprint': '80C9F0F97906C21FDEB3C795F72C31F485CCBF08', 'hasPrivateKey': True} Dec 16 12:13:48.252115 waagent[2298]: 2025-12-16T12:13:48.252068Z INFO Daemon Fetch goal state completed Dec 16 12:13:48.263874 waagent[2298]: 2025-12-16T12:13:48.263836Z INFO Daemon Daemon Starting provisioning Dec 16 12:13:48.268364 waagent[2298]: 2025-12-16T12:13:48.268325Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 16 12:13:48.272729 waagent[2298]: 2025-12-16T12:13:48.272701Z INFO Daemon Daemon Set hostname [ci-4547.0.0-a-4d45b340a5] Dec 16 12:13:48.280744 waagent[2298]: 2025-12-16T12:13:48.280690Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-a-4d45b340a5] Dec 16 12:13:48.286657 waagent[2298]: 2025-12-16T12:13:48.286601Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:13:48.292838 waagent[2298]: 2025-12-16T12:13:48.292783Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:13:48.303500 systemd-networkd[1754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:13:48.303512 systemd-networkd[1754]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:13:48.303597 systemd-networkd[1754]: eth0: DHCP lease lost Dec 16 12:13:48.316971 waagent[2298]: 2025-12-16T12:13:48.316883Z INFO Daemon Daemon Create user account if not exists Dec 16 12:13:48.322369 waagent[2298]: 2025-12-16T12:13:48.322305Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:13:48.327676 waagent[2298]: 2025-12-16T12:13:48.327617Z INFO Daemon Daemon Configure sudoer Dec 16 12:13:48.336563 systemd-networkd[1754]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:13:48.341861 waagent[2298]: 2025-12-16T12:13:48.341780Z INFO Daemon Daemon Configure sshd Dec 16 12:13:48.349517 waagent[2298]: 2025-12-16T12:13:48.349431Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:13:48.359626 waagent[2298]: 2025-12-16T12:13:48.359574Z INFO Daemon Daemon Deploy ssh public key. Dec 16 12:13:49.497577 waagent[2298]: 2025-12-16T12:13:49.497525Z INFO Daemon Daemon Provisioning complete Dec 16 12:13:49.514846 waagent[2298]: 2025-12-16T12:13:49.514800Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:13:49.520888 waagent[2298]: 2025-12-16T12:13:49.520837Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 12:13:49.529318 waagent[2298]: 2025-12-16T12:13:49.529272Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:13:49.631522 waagent[2391]: 2025-12-16T12:13:49.631429Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:13:49.632532 waagent[2391]: 2025-12-16T12:13:49.631917Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Dec 16 12:13:49.632532 waagent[2391]: 2025-12-16T12:13:49.631972Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:13:49.632532 waagent[2391]: 2025-12-16T12:13:49.632012Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:13:49.667367 waagent[2391]: 2025-12-16T12:13:49.667281Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:13:49.667556 waagent[2391]: 2025-12-16T12:13:49.667525Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:13:49.667602 waagent[2391]: 2025-12-16T12:13:49.667583Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:13:49.674712 waagent[2391]: 2025-12-16T12:13:49.674659Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:13:49.682162 waagent[2391]: 2025-12-16T12:13:49.682127Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:13:49.682654 waagent[2391]: 2025-12-16T12:13:49.682619Z INFO ExtHandler Dec 16 12:13:49.682713 waagent[2391]: 2025-12-16T12:13:49.682694Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d30d8c6e-e001-47ce-855d-e9cef703ffe4 eTag: 6944257759668697212 source: Fabric] Dec 16 12:13:49.682945 waagent[2391]: 2025-12-16T12:13:49.682918Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:13:49.683354 waagent[2391]: 2025-12-16T12:13:49.683325Z INFO ExtHandler Dec 16 12:13:49.683394 waagent[2391]: 2025-12-16T12:13:49.683377Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:13:49.688130 waagent[2391]: 2025-12-16T12:13:49.688098Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:13:49.748165 waagent[2391]: 2025-12-16T12:13:49.748023Z INFO ExtHandler Downloaded certificate {'thumbprint': '80C9F0F97906C21FDEB3C795F72C31F485CCBF08', 'hasPrivateKey': True} Dec 16 12:13:49.748557 waagent[2391]: 2025-12-16T12:13:49.748518Z INFO ExtHandler Fetch goal state completed Dec 16 12:13:49.762836 waagent[2391]: 2025-12-16T12:13:49.762774Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Dec 16 12:13:49.766414 waagent[2391]: 2025-12-16T12:13:49.766364Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2391 Dec 16 12:13:49.766542 waagent[2391]: 2025-12-16T12:13:49.766517Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:13:49.766795 waagent[2391]: 2025-12-16T12:13:49.766766Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:13:49.767888 waagent[2391]: 2025-12-16T12:13:49.767851Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:13:49.768210 waagent[2391]: 2025-12-16T12:13:49.768179Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:13:49.768323 waagent[2391]: 2025-12-16T12:13:49.768302Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:13:49.768787 waagent[2391]: 2025-12-16T12:13:49.768756Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 12:13:49.815708 waagent[2391]: 2025-12-16T12:13:49.815260Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:13:49.815708 waagent[2391]: 2025-12-16T12:13:49.815465Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:13:49.820201 waagent[2391]: 2025-12-16T12:13:49.820163Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:13:49.825529 systemd[1]: Reload requested from client PID 2406 ('systemctl') (unit waagent.service)... Dec 16 12:13:49.825765 systemd[1]: Reloading... Dec 16 12:13:49.891579 zram_generator::config[2448]: No configuration found. Dec 16 12:13:50.055648 systemd[1]: Reloading finished in 229 ms. Dec 16 12:13:50.081690 waagent[2391]: 2025-12-16T12:13:50.081607Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:13:50.081790 waagent[2391]: 2025-12-16T12:13:50.081772Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:13:50.429522 waagent[2391]: 2025-12-16T12:13:50.429287Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:13:50.429679 waagent[2391]: 2025-12-16T12:13:50.429640Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:13:50.430358 waagent[2391]: 2025-12-16T12:13:50.430315Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:13:50.430725 waagent[2391]: 2025-12-16T12:13:50.430599Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 12:13:50.430893 waagent[2391]: 2025-12-16T12:13:50.430861Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:13:50.431013 waagent[2391]: 2025-12-16T12:13:50.430985Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:13:50.431044 waagent[2391]: 2025-12-16T12:13:50.431018Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:13:50.431256 waagent[2391]: 2025-12-16T12:13:50.431229Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:13:50.431369 waagent[2391]: 2025-12-16T12:13:50.431348Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:13:50.431589 waagent[2391]: 2025-12-16T12:13:50.431554Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:13:50.431627 waagent[2391]: 2025-12-16T12:13:50.431592Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 12:13:50.431870 waagent[2391]: 2025-12-16T12:13:50.431806Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:13:50.432902 waagent[2391]: 2025-12-16T12:13:50.432513Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:13:50.432902 waagent[2391]: 2025-12-16T12:13:50.432675Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:13:50.432902 waagent[2391]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:13:50.432902 waagent[2391]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:13:50.432902 waagent[2391]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:13:50.432902 waagent[2391]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:13:50.432902 waagent[2391]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:13:50.432902 waagent[2391]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:13:50.433412 waagent[2391]: 2025-12-16T12:13:50.433393Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:13:50.433621 waagent[2391]: 2025-12-16T12:13:50.433589Z INFO EnvHandler ExtHandler Configure routes Dec 16 12:13:50.433758 waagent[2391]: 2025-12-16T12:13:50.433743Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:13:50.433834 waagent[2391]: 2025-12-16T12:13:50.433820Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:13:50.438509 waagent[2391]: 2025-12-16T12:13:50.438440Z INFO ExtHandler ExtHandler Dec 16 12:13:50.438565 waagent[2391]: 2025-12-16T12:13:50.438537Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a4f21c45-6df4-4e47-8dfb-8f6ccaea4696 correlation e575cb3b-5cc8-464b-8523-8924ad10e7d6 created: 2025-12-16T12:12:51.525223Z] Dec 16 12:13:50.438860 waagent[2391]: 2025-12-16T12:13:50.438821Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
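The routing table that waagent prints above is a verbatim copy of /proc/net/route, where the Destination, Gateway and Mask columns are little-endian hexadecimal IPv4 values. A minimal Python sketch for turning those columns back into dotted-quad form; the helper name is illustrative and the hex strings are copied from the table above:

    import socket
    import struct

    def decode_route_addr(hexaddr: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex words
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    # Values copied from the waagent routing-table dump above
    print(decode_route_addr("0114C80A"))  # 10.200.20.1     - default gateway
    print(decode_route_addr("0014C80A"))  # 10.200.20.0     - local subnet (mask 00FFFFFF = /24)
    print(decode_route_addr("10813FA8"))  # 168.63.129.16   - wireserver host route
    print(decode_route_addr("FEA9FEA9"))  # 169.254.169.254 - instance metadata service

Decoded this way, the table is consistent with the rest of the log: the 10.200.20.1 gateway matches the DHCPv4 lease acquired earlier, and 168.63.129.16 is the wire server endpoint reported by both the daemon and the ExtHandler.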
Dec 16 12:13:50.439263 waagent[2391]: 2025-12-16T12:13:50.439232Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 12:13:50.471583 waagent[2391]: 2025-12-16T12:13:50.471101Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:13:50.471583 waagent[2391]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 12:13:50.471583 waagent[2391]: 2025-12-16T12:13:50.471496Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 338631CF-72DD-4991-96CE-0B1EE96CE7D1;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:13:50.518374 waagent[2391]: 2025-12-16T12:13:50.518295Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:13:50.518374 waagent[2391]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:13:50.518374 waagent[2391]: pkts bytes target prot opt in out source destination Dec 16 12:13:50.518374 waagent[2391]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:13:50.518374 waagent[2391]: pkts bytes target prot opt in out source destination Dec 16 12:13:50.518374 waagent[2391]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:13:50.518374 waagent[2391]: pkts bytes target prot opt in out source destination Dec 16 12:13:50.518374 waagent[2391]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:13:50.518374 waagent[2391]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:13:50.518374 waagent[2391]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:13:50.520949 waagent[2391]: 2025-12-16T12:13:50.520889Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:13:50.520949 waagent[2391]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:13:50.520949 waagent[2391]: pkts bytes target prot opt in out source destination Dec 16 12:13:50.520949 waagent[2391]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:13:50.520949 waagent[2391]: pkts bytes target prot opt in out source destination Dec 16 12:13:50.520949 waagent[2391]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:13:50.520949 waagent[2391]: pkts bytes target prot opt in out source destination Dec 16 12:13:50.520949 waagent[2391]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:13:50.520949 waagent[2391]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:13:50.520949 waagent[2391]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:13:50.521183 waagent[2391]: 2025-12-16T12:13:50.521155Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:13:50.548955 waagent[2391]: 2025-12-16T12:13:50.548611Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:13:50.548955 waagent[2391]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:13:50.548955 waagent[2391]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:13:50.548955 waagent[2391]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:25:2c brd ff:ff:ff:ff:ff:ff\ altname enx000d3afb252c Dec 16 12:13:50.548955 waagent[2391]: 3: enP63690s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT 
group default qlen 1000\ link/ether 00:0d:3a:fb:25:2c brd ff:ff:ff:ff:ff:ff\ altname enP63690p0s2 Dec 16 12:13:50.548955 waagent[2391]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:13:50.548955 waagent[2391]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:13:50.548955 waagent[2391]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:13:50.548955 waagent[2391]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:13:50.548955 waagent[2391]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:13:50.548955 waagent[2391]: 2: eth0 inet6 fe80::20d:3aff:fefb:252c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:13:57.072914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:13:57.074581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:13:57.179602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:13:57.184868 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:13:57.300168 kubelet[2543]: E1216 12:13:57.300081 2543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:13:57.302739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:13:57.302854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:13:57.303202 systemd[1]: kubelet.service: Consumed 117ms CPU time, 104.7M memory peak. Dec 16 12:14:07.323293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:14:07.324775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:07.651669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:07.654568 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:07.681075 kubelet[2557]: E1216 12:14:07.681022 2557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:07.683594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:07.683811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:14:07.684426 systemd[1]: kubelet.service: Consumed 111ms CPU time, 106.8M memory peak. Dec 16 12:14:07.712642 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:14:07.714154 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:38732.service - OpenSSH per-connection server daemon (10.200.16.10:38732). 
Dec 16 12:14:08.348199 sshd[2564]: Accepted publickey for core from 10.200.16.10 port 38732 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:08.349310 sshd-session[2564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:08.352920 systemd-logind[2131]: New session 4 of user core. Dec 16 12:14:08.358623 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:14:08.649414 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:38746.service - OpenSSH per-connection server daemon (10.200.16.10:38746). Dec 16 12:14:09.071607 sshd[2571]: Accepted publickey for core from 10.200.16.10 port 38746 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:09.072615 sshd-session[2571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:09.074813 chronyd[2105]: Selected source PHC0 Dec 16 12:14:09.076874 systemd-logind[2131]: New session 5 of user core. Dec 16 12:14:09.088799 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:14:09.308352 sshd[2575]: Connection closed by 10.200.16.10 port 38746 Dec 16 12:14:09.309015 sshd-session[2571]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:09.312724 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:38746.service: Deactivated successfully. Dec 16 12:14:09.314278 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:14:09.314991 systemd-logind[2131]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:14:09.316220 systemd-logind[2131]: Removed session 5. Dec 16 12:14:09.400453 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:38760.service - OpenSSH per-connection server daemon (10.200.16.10:38760). Dec 16 12:14:09.827253 sshd[2581]: Accepted publickey for core from 10.200.16.10 port 38760 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:09.828442 sshd-session[2581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:09.832394 systemd-logind[2131]: New session 6 of user core. Dec 16 12:14:09.840667 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:14:10.061210 sshd[2585]: Connection closed by 10.200.16.10 port 38760 Dec 16 12:14:10.061107 sshd-session[2581]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:10.064971 systemd-logind[2131]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:14:10.065589 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:38760.service: Deactivated successfully. Dec 16 12:14:10.067842 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:14:10.070781 systemd-logind[2131]: Removed session 6. Dec 16 12:14:10.148413 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:41184.service - OpenSSH per-connection server daemon (10.200.16.10:41184). Dec 16 12:14:10.568224 sshd[2591]: Accepted publickey for core from 10.200.16.10 port 41184 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:10.569287 sshd-session[2591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:10.573682 systemd-logind[2131]: New session 7 of user core. Dec 16 12:14:10.583645 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 12:14:10.803630 sshd[2595]: Connection closed by 10.200.16.10 port 41184 Dec 16 12:14:10.804181 sshd-session[2591]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:10.807988 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:41184.service: Deactivated successfully. Dec 16 12:14:10.808125 systemd-logind[2131]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:14:10.809910 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:14:10.811455 systemd-logind[2131]: Removed session 7. Dec 16 12:14:10.882502 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:41190.service - OpenSSH per-connection server daemon (10.200.16.10:41190). Dec 16 12:14:11.272861 sshd[2601]: Accepted publickey for core from 10.200.16.10 port 41190 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:11.273981 sshd-session[2601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:11.278228 systemd-logind[2131]: New session 8 of user core. Dec 16 12:14:11.286658 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:14:11.546888 sudo[2606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:14:11.547098 sudo[2606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:14:11.572888 sudo[2606]: pam_unix(sudo:session): session closed for user root Dec 16 12:14:11.644208 sshd[2605]: Connection closed by 10.200.16.10 port 41190 Dec 16 12:14:11.644063 sshd-session[2601]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:11.647174 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:41190.service: Deactivated successfully. Dec 16 12:14:11.648750 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:14:11.651253 systemd-logind[2131]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:14:11.652055 systemd-logind[2131]: Removed session 8. Dec 16 12:14:11.735584 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:41194.service - OpenSSH per-connection server daemon (10.200.16.10:41194). Dec 16 12:14:12.123931 sshd[2613]: Accepted publickey for core from 10.200.16.10 port 41194 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:12.125122 sshd-session[2613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:12.128844 systemd-logind[2131]: New session 9 of user core. Dec 16 12:14:12.134794 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:14:12.270889 sudo[2619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:14:12.271098 sudo[2619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:14:12.278626 sudo[2619]: pam_unix(sudo:session): session closed for user root Dec 16 12:14:12.283354 sudo[2618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:14:12.283616 sudo[2618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:14:12.289683 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 16 12:14:12.318000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:14:12.319691 augenrules[2643]: No rules Dec 16 12:14:12.321880 kernel: kauditd_printk_skb: 77 callbacks suppressed Dec 16 12:14:12.321928 kernel: audit: type=1305 audit(1765887252.318:252): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:14:12.329764 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:14:12.329974 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:14:12.318000 audit[2643]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffbe5e320 a2=420 a3=0 items=0 ppid=2624 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:12.332690 sudo[2618]: pam_unix(sudo:session): session closed for user root Dec 16 12:14:12.347922 kernel: audit: type=1300 audit(1765887252.318:252): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffbe5e320 a2=420 a3=0 items=0 ppid=2624 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:12.318000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:14:12.355764 kernel: audit: type=1327 audit(1765887252.318:252): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:14:12.355820 kernel: audit: type=1130 audit(1765887252.329:253): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.379464 kernel: audit: type=1131 audit(1765887252.329:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.379527 kernel: audit: type=1106 audit(1765887252.329:255): pid=2618 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.329000 audit[2618]: USER_END pid=2618 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.329000 audit[2618]: CRED_DISP pid=2618 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:12.404974 kernel: audit: type=1104 audit(1765887252.329:256): pid=2618 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.418195 sshd[2617]: Connection closed by 10.200.16.10 port 41194 Dec 16 12:14:12.419675 sshd-session[2613]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:12.420000 audit[2613]: USER_END pid=2613 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:12.420000 audit[2613]: CRED_DISP pid=2613 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:12.440771 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:41194.service: Deactivated successfully. Dec 16 12:14:12.448389 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:14:12.455551 kernel: audit: type=1106 audit(1765887252.420:257): pid=2613 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:12.455639 kernel: audit: type=1104 audit(1765887252.420:258): pid=2613 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:12.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.37:22-10.200.16.10:41194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.469139 kernel: audit: type=1131 audit(1765887252.446:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.37:22-10.200.16.10:41194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.469497 systemd-logind[2131]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:14:12.470390 systemd-logind[2131]: Removed session 9. Dec 16 12:14:12.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:41210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:12.499970 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:41210.service - OpenSSH per-connection server daemon (10.200.16.10:41210). 
Dec 16 12:14:12.889000 audit[2652]: USER_ACCT pid=2652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:12.890689 sshd[2652]: Accepted publickey for core from 10.200.16.10 port 41210 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:14:12.890000 audit[2652]: CRED_ACQ pid=2652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:12.890000 audit[2652]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa632860 a2=3 a3=0 items=0 ppid=1 pid=2652 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:12.890000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:14:12.891920 sshd-session[2652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:14:12.895645 systemd-logind[2131]: New session 10 of user core. Dec 16 12:14:12.905659 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:14:12.907000 audit[2652]: USER_START pid=2652 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:12.909000 audit[2656]: CRED_ACQ pid=2656 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:13.036875 sudo[2657]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:14:13.036000 audit[2657]: USER_ACCT pid=2657 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:13.036000 audit[2657]: CRED_REFR pid=2657 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:13.037603 sudo[2657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:14:13.037000 audit[2657]: USER_START pid=2657 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:14.057524 systemd[1]: Starting docker.service - Docker Application Container Engine... 
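The audit records interleaved with this part of the log carry the executed command line in a PROCTITLE field, which is the process argv hex-encoded with NUL bytes between arguments (the auditctl record above and the iptables/ip6tables records that follow use the same encoding). A small Python sketch for reading them; the function name is illustrative and the hex strings are copied verbatim from records in this log:

    def decode_proctitle(hexstr: str) -> str:
        # audit PROCTITLE is the process argv, hex-encoded, NUL-separated
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

    # From the audit-rules reload recorded above
    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"))
    # -> /sbin/auditctl -R /etc/audit/audit.rules

    # From the SSH login record above
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]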
Dec 16 12:14:14.074763 (dockerd)[2675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:14:14.922484 dockerd[2675]: time="2025-12-16T12:14:14.920685881Z" level=info msg="Starting up" Dec 16 12:14:14.924007 dockerd[2675]: time="2025-12-16T12:14:14.923557889Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:14:14.931750 dockerd[2675]: time="2025-12-16T12:14:14.931721761Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:14:14.961883 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4132573780-merged.mount: Deactivated successfully. Dec 16 12:14:15.009299 dockerd[2675]: time="2025-12-16T12:14:15.009255345Z" level=info msg="Loading containers: start." Dec 16 12:14:15.038524 kernel: Initializing XFRM netlink socket Dec 16 12:14:15.108000 audit[2723]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.108000 audit[2723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdddd04a0 a2=0 a3=0 items=0 ppid=2675 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:14:15.110000 audit[2725]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.110000 audit[2725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcce6ee70 a2=0 a3=0 items=0 ppid=2675 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.110000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:14:15.112000 audit[2727]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.112000 audit[2727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd235b9d0 a2=0 a3=0 items=0 ppid=2675 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.112000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:14:15.113000 audit[2729]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.113000 audit[2729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb303470 a2=0 a3=0 items=0 ppid=2675 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.113000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:14:15.115000 audit[2731]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.115000 audit[2731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc73ad860 a2=0 a3=0 items=0 ppid=2675 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.115000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:14:15.117000 audit[2733]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2733 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.117000 audit[2733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff665e3d0 a2=0 a3=0 items=0 ppid=2675 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:14:15.118000 audit[2735]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.118000 audit[2735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcc0a9370 a2=0 a3=0 items=0 ppid=2675 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.118000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:14:15.120000 audit[2737]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.120000 audit[2737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd9149e70 a2=0 a3=0 items=0 ppid=2675 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.120000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:14:15.178000 audit[2740]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.178000 audit[2740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd164a190 a2=0 a3=0 items=0 ppid=2675 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.178000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:14:15.180000 audit[2742]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.180000 audit[2742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffc8d1e40 a2=0 a3=0 items=0 ppid=2675 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:14:15.181000 audit[2744]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.181000 audit[2744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffff300700 a2=0 a3=0 items=0 ppid=2675 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:14:15.183000 audit[2746]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.183000 audit[2746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe3dfa630 a2=0 a3=0 items=0 ppid=2675 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:14:15.185000 audit[2748]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.185000 audit[2748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffee4cb160 a2=0 a3=0 items=0 ppid=2675 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:14:15.239000 audit[2778]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.239000 audit[2778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdb8d7af0 a2=0 a3=0 items=0 ppid=2675 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.239000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:14:15.241000 audit[2780]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.241000 audit[2780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd0b873f0 a2=0 a3=0 items=0 ppid=2675 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:14:15.243000 audit[2782]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.243000 audit[2782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6bb36f0 a2=0 a3=0 items=0 ppid=2675 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:14:15.244000 audit[2784]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.244000 audit[2784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd296e500 a2=0 a3=0 items=0 ppid=2675 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:14:15.246000 audit[2786]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.246000 audit[2786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee4ee390 a2=0 a3=0 items=0 ppid=2675 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:14:15.247000 audit[2788]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.247000 audit[2788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc1d40d50 a2=0 a3=0 items=0 ppid=2675 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.247000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:14:15.249000 audit[2790]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2790 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.249000 audit[2790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeb27e8e0 a2=0 a3=0 items=0 ppid=2675 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:14:15.250000 audit[2792]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.250000 audit[2792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc35c9340 a2=0 a3=0 items=0 ppid=2675 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.250000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:14:15.252000 audit[2794]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.252000 audit[2794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffe00afd50 a2=0 a3=0 items=0 ppid=2675 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:14:15.254000 audit[2796]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.254000 audit[2796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffc7c7b50 a2=0 a3=0 items=0 ppid=2675 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:14:15.255000 audit[2798]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.255000 audit[2798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc82d15c0 a2=0 a3=0 items=0 ppid=2675 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.255000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:14:15.257000 audit[2800]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2800 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.257000 audit[2800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffdd39a1f0 a2=0 a3=0 items=0 ppid=2675 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:14:15.258000 audit[2802]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.258000 audit[2802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffed0bdf30 a2=0 a3=0 items=0 ppid=2675 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:14:15.262000 audit[2807]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.262000 audit[2807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffec7fc110 a2=0 a3=0 items=0 ppid=2675 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:14:15.263000 audit[2809]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.263000 audit[2809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffeca96180 a2=0 a3=0 items=0 ppid=2675 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.263000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:14:15.265000 audit[2811]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2811 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.265000 audit[2811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffc769600 a2=0 a3=0 items=0 ppid=2675 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:14:15.267000 audit[2813]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2813 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.267000 audit[2813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdc378ec0 a2=0 a3=0 items=0 ppid=2675 pid=2813 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:14:15.268000 audit[2815]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.268000 audit[2815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc1b0b940 a2=0 a3=0 items=0 ppid=2675 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:14:15.270000 audit[2817]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:15.270000 audit[2817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc133c140 a2=0 a3=0 items=0 ppid=2675 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:14:15.331000 audit[2822]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.331000 audit[2822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffdde462b0 a2=0 a3=0 items=0 ppid=2675 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:14:15.333000 audit[2824]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2824 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.333000 audit[2824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc0d146b0 a2=0 a3=0 items=0 ppid=2675 pid=2824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:14:15.340000 audit[2832]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.340000 audit[2832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffeae70bf0 a2=0 a3=0 items=0 ppid=2675 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:14:15.344000 audit[2837]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.344000 audit[2837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd3377010 a2=0 a3=0 items=0 ppid=2675 pid=2837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:14:15.346000 audit[2839]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.346000 audit[2839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd9f4df20 a2=0 a3=0 items=0 ppid=2675 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:14:15.348000 audit[2841]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.348000 audit[2841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe88cdda0 a2=0 a3=0 items=0 ppid=2675 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:14:15.349000 audit[2843]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.349000 audit[2843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffed145420 a2=0 a3=0 items=0 ppid=2675 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.349000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:14:15.351000 audit[2845]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:15.351000 audit[2845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffc3d05e0 a2=0 a3=0 items=0 ppid=2675 pid=2845 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:15.351000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:14:15.352548 systemd-networkd[1754]: docker0: Link UP Dec 16 12:14:15.373504 dockerd[2675]: time="2025-12-16T12:14:15.373400113Z" level=info msg="Loading containers: done." Dec 16 12:14:15.441523 dockerd[2675]: time="2025-12-16T12:14:15.441362681Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:14:15.441523 dockerd[2675]: time="2025-12-16T12:14:15.441514521Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:14:15.441684 dockerd[2675]: time="2025-12-16T12:14:15.441629489Z" level=info msg="Initializing buildkit" Dec 16 12:14:15.484836 dockerd[2675]: time="2025-12-16T12:14:15.484793841Z" level=info msg="Completed buildkit initialization" Dec 16 12:14:15.490617 dockerd[2675]: time="2025-12-16T12:14:15.490524401Z" level=info msg="Daemon has completed initialization" Dec 16 12:14:15.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:15.491387 dockerd[2675]: time="2025-12-16T12:14:15.490742769Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:14:15.491119 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:14:16.541167 containerd[2156]: time="2025-12-16T12:14:16.541128161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:14:17.822901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:14:17.824167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:17.942641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531110019.mount: Deactivated successfully. Dec 16 12:14:18.361728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:18.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:18.364997 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:14:18.365073 kernel: audit: type=1130 audit(1765887258.361:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:18.379418 (kubelet)[2898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:18.411231 kubelet[2898]: E1216 12:14:18.411188 2898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:18.413751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:18.413998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:14:18.416620 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.3M memory peak. Dec 16 12:14:18.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:14:18.430510 kernel: audit: type=1131 audit(1765887258.416:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:14:19.255037 containerd[2156]: time="2025-12-16T12:14:19.254979235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:19.258541 containerd[2156]: time="2025-12-16T12:14:19.258500389Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25822401" Dec 16 12:14:19.261674 containerd[2156]: time="2025-12-16T12:14:19.261636061Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:19.266184 containerd[2156]: time="2025-12-16T12:14:19.266142505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:19.266681 containerd[2156]: time="2025-12-16T12:14:19.266655584Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.725490062s" Dec 16 12:14:19.266733 containerd[2156]: time="2025-12-16T12:14:19.266688436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 16 12:14:19.267945 containerd[2156]: time="2025-12-16T12:14:19.267896637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:14:20.668832 containerd[2156]: time="2025-12-16T12:14:20.668774743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:20.674177 containerd[2156]: time="2025-12-16T12:14:20.674127165Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active 
requests=0, bytes read=23544927" Dec 16 12:14:20.678100 containerd[2156]: time="2025-12-16T12:14:20.677996012Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:20.682217 containerd[2156]: time="2025-12-16T12:14:20.682173485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:20.682829 containerd[2156]: time="2025-12-16T12:14:20.682725344Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.41480164s" Dec 16 12:14:20.682829 containerd[2156]: time="2025-12-16T12:14:20.682752611Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 16 12:14:20.683353 containerd[2156]: time="2025-12-16T12:14:20.683267330Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 12:14:22.403993 containerd[2156]: time="2025-12-16T12:14:22.403934208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:22.409138 containerd[2156]: time="2025-12-16T12:14:22.408949098Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 16 12:14:22.413290 containerd[2156]: time="2025-12-16T12:14:22.413261945Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:22.418614 containerd[2156]: time="2025-12-16T12:14:22.418563962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:22.419257 containerd[2156]: time="2025-12-16T12:14:22.418840600Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.735547171s" Dec 16 12:14:22.419257 containerd[2156]: time="2025-12-16T12:14:22.418869571Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 16 12:14:22.419457 containerd[2156]: time="2025-12-16T12:14:22.419348975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:14:23.560108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297507280.mount: Deactivated successfully. 
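
The audit records around 12:14:15 (pids 2723 through 2845) log every iptables/ip6tables call dockerd made while wiring up its chains; the command line itself is carried in the PROCTITLE field as hex with NUL bytes between argv entries. Below is a minimal Python sketch to make those readable; the sample value is copied verbatim from the pid 2723 record above, everything else is just illustration.

    # Decode an audit PROCTITLE value: argv is hex-encoded with NUL separators.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    # Copied from the pid=2723 audit record above:
    sample = "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    print(decode_proctitle(sample))   # /usr/bin/iptables --wait -t nat -N DOCKER

Decoded in full, that run creates the DOCKER, DOCKER-USER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT and DOCKER-ISOLATION-STAGE-1/2 chains (and their ip6tables counterparts), hooks them into FORWARD and the nat PREROUTING/OUTPUT chains, and installs the POSTROUTING MASQUERADE rule for 172.17.0.0/16 out of docker0.
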
Dec 16 12:14:23.837234 containerd[2156]: time="2025-12-16T12:14:23.836795345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:23.840671 containerd[2156]: time="2025-12-16T12:14:23.840616884Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Dec 16 12:14:23.843998 containerd[2156]: time="2025-12-16T12:14:23.843974660Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:23.847725 containerd[2156]: time="2025-12-16T12:14:23.847685970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:23.848143 containerd[2156]: time="2025-12-16T12:14:23.847936349Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.428561315s" Dec 16 12:14:23.848143 containerd[2156]: time="2025-12-16T12:14:23.847964944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 16 12:14:23.848373 containerd[2156]: time="2025-12-16T12:14:23.848351794Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:14:24.526088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703020280.mount: Deactivated successfully. 
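
Each completed pull above ends with a containerd line reporting the image, its repo digest, a size and the elapsed time; the log does not say whether that size is the compressed download or the unpacked content, so treat any throughput number as a rough estimate. Here is a small parsing sketch, assuming the journal has been saved as plain text in exactly the message format shown here:

    import re

    # Matches containerd's 'Pulled image \"...\" ... size \"N\" in D' messages as they
    # appear in this journal (the escaped quotes are literal characters in the text).
    PULLED = re.compile(
        r'Pulled image \\"(?P<image>[^\\"]+)\\".*?size \\"(?P<size>\d+)\\"'
        r' in (?P<secs>[0-9.]+)(?P<unit>ms|s)'
    )

    def pull_stats(journal_text: str):
        for m in PULLED.finditer(journal_text):
            secs = float(m["secs"]) / (1000.0 if m["unit"] == "ms" else 1.0)
            size_mb = int(m["size"]) / 1e6
            yield m["image"], size_mb, secs, size_mb / secs  # name, MB, s, MB/s

For the kube-apiserver pull above that works out to roughly 27.4 MB in 2.73 s, on the order of 10 MB/s.
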
Dec 16 12:14:25.305699 containerd[2156]: time="2025-12-16T12:14:25.305635683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:25.309024 containerd[2156]: time="2025-12-16T12:14:25.308823890Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 16 12:14:25.312031 containerd[2156]: time="2025-12-16T12:14:25.312006297Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:25.316589 containerd[2156]: time="2025-12-16T12:14:25.316557993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:25.317302 containerd[2156]: time="2025-12-16T12:14:25.317274994Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.468897381s" Dec 16 12:14:25.317386 containerd[2156]: time="2025-12-16T12:14:25.317372837Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 16 12:14:25.317936 containerd[2156]: time="2025-12-16T12:14:25.317906001Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:14:25.860832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount813318664.mount: Deactivated successfully. 
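
Taken together, the PullImage lines in this stretch stage the image set a kubeadm-style v1.33 control-plane node needs before its static pods can start (the pause and etcd pulls complete a few entries further down). The tags below are copied from this journal; the grouping and the kubeadm framing are an interpretation rather than something the log states, and kubeadm config images list is the usual way to confirm the expected set for a given release.

    # Image tags as pulled in this journal; the dict layout is just a checklist.
    CONTROL_PLANE_IMAGES = {
        "kube-apiserver":          "registry.k8s.io/kube-apiserver:v1.33.7",
        "kube-controller-manager": "registry.k8s.io/kube-controller-manager:v1.33.7",
        "kube-scheduler":          "registry.k8s.io/kube-scheduler:v1.33.7",
        "kube-proxy":              "registry.k8s.io/kube-proxy:v1.33.7",
        "coredns":                 "registry.k8s.io/coredns/coredns:v1.12.0",
        "pause":                   "registry.k8s.io/pause:3.10",
        "etcd":                    "registry.k8s.io/etcd:3.5.21-0",
    }
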
Dec 16 12:14:25.880262 containerd[2156]: time="2025-12-16T12:14:25.880211831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:14:25.884058 containerd[2156]: time="2025-12-16T12:14:25.884010083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=886" Dec 16 12:14:25.887433 containerd[2156]: time="2025-12-16T12:14:25.887404273Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:14:25.891811 containerd[2156]: time="2025-12-16T12:14:25.891781142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:14:25.892566 containerd[2156]: time="2025-12-16T12:14:25.892055149Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 574.120721ms" Dec 16 12:14:25.892566 containerd[2156]: time="2025-12-16T12:14:25.892077368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:14:25.892743 containerd[2156]: time="2025-12-16T12:14:25.892703574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 12:14:26.524579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390102237.mount: Deactivated successfully. Dec 16 12:14:27.064747 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 12:14:28.572870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 12:14:28.574131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
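
This is the fourth scheduled restart of kubelet.service, and like attempt three earlier it will exit because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm init or kubeadm join, so a crash-looping kubelet at this stage is expected rather than broken. A tiny sketch of the check worth making before digging deeper; the path is taken from the error in this journal, the rest is illustrative and not kubelet's actual startup code.

    from pathlib import Path

    # Path copied from the kubelet error in this journal; kubeadm init/join normally creates it.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_expected_to_start() -> bool:
        if not KUBELET_CONFIG.exists():
            print(f"{KUBELET_CONFIG} missing: kubelet will keep exiting until kubeadm writes it")
            return False
        return True
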
Dec 16 12:14:28.783662 containerd[2156]: time="2025-12-16T12:14:28.783601348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:28.850255 containerd[2156]: time="2025-12-16T12:14:28.850030975Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=68134882" Dec 16 12:14:28.895630 containerd[2156]: time="2025-12-16T12:14:28.895416807Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:28.907516 containerd[2156]: time="2025-12-16T12:14:28.903969682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:28.907516 containerd[2156]: time="2025-12-16T12:14:28.904663545Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.011919006s" Dec 16 12:14:28.907516 containerd[2156]: time="2025-12-16T12:14:28.904687243Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 16 12:14:29.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:29.145384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:29.159512 kernel: audit: type=1130 audit(1765887269.144:312): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:29.163738 (kubelet)[3097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:14:29.435643 kubelet[3097]: E1216 12:14:29.435515 3097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:14:29.437951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:14:29.438186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:14:29.438822 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105.2M memory peak. Dec 16 12:14:29.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:14:29.453526 kernel: audit: type=1131 audit(1765887269.437:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:14:31.394498 update_engine[2135]: I20251216 12:14:31.393508 2135 update_attempter.cc:509] Updating boot flags... Dec 16 12:14:31.791797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:31.792147 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105.2M memory peak. Dec 16 12:14:31.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:31.796674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:31.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:31.820701 kernel: audit: type=1130 audit(1765887271.790:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:31.820793 kernel: audit: type=1131 audit(1765887271.790:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:31.832676 systemd[1]: Reload requested from client PID 3189 ('systemctl') (unit session-10.scope)... Dec 16 12:14:31.832690 systemd[1]: Reloading... Dec 16 12:14:31.935584 zram_generator::config[3241]: No configuration found. Dec 16 12:14:32.094343 systemd[1]: Reloading finished in 261 ms. Dec 16 12:14:32.120000 audit: BPF prog-id=87 op=LOAD Dec 16 12:14:32.120000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:14:32.131989 kernel: audit: type=1334 audit(1765887272.120:316): prog-id=87 op=LOAD Dec 16 12:14:32.132051 kernel: audit: type=1334 audit(1765887272.120:317): prog-id=67 op=UNLOAD Dec 16 12:14:32.126000 audit: BPF prog-id=88 op=LOAD Dec 16 12:14:32.136494 kernel: audit: type=1334 audit(1765887272.126:318): prog-id=88 op=LOAD Dec 16 12:14:32.126000 audit: BPF prog-id=89 op=LOAD Dec 16 12:14:32.140895 kernel: audit: type=1334 audit(1765887272.126:319): prog-id=89 op=LOAD Dec 16 12:14:32.126000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:14:32.145587 kernel: audit: type=1334 audit(1765887272.126:320): prog-id=68 op=UNLOAD Dec 16 12:14:32.126000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:14:32.150743 kernel: audit: type=1334 audit(1765887272.126:321): prog-id=69 op=UNLOAD Dec 16 12:14:32.130000 audit: BPF prog-id=90 op=LOAD Dec 16 12:14:32.144000 audit: BPF prog-id=91 op=LOAD Dec 16 12:14:32.144000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:14:32.144000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:14:32.149000 audit: BPF prog-id=92 op=LOAD Dec 16 12:14:32.149000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:14:32.150000 audit: BPF prog-id=93 op=LOAD Dec 16 12:14:32.150000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:14:32.150000 audit: BPF prog-id=94 op=LOAD Dec 16 12:14:32.151000 audit: BPF prog-id=95 op=LOAD Dec 16 12:14:32.151000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:14:32.151000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:14:32.151000 audit: BPF prog-id=96 op=LOAD Dec 16 12:14:32.152000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:14:32.152000 audit: BPF prog-id=97 op=LOAD Dec 16 12:14:32.152000 audit: BPF prog-id=98 op=LOAD Dec 16 12:14:32.152000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:14:32.152000 audit: 
BPF prog-id=81 op=UNLOAD Dec 16 12:14:32.153000 audit: BPF prog-id=99 op=LOAD Dec 16 12:14:32.153000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:14:32.153000 audit: BPF prog-id=100 op=LOAD Dec 16 12:14:32.153000 audit: BPF prog-id=101 op=LOAD Dec 16 12:14:32.153000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:14:32.153000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:14:32.154000 audit: BPF prog-id=102 op=LOAD Dec 16 12:14:32.154000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:14:32.154000 audit: BPF prog-id=103 op=LOAD Dec 16 12:14:32.154000 audit: BPF prog-id=104 op=LOAD Dec 16 12:14:32.154000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:14:32.154000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:14:32.155000 audit: BPF prog-id=105 op=LOAD Dec 16 12:14:32.155000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:14:32.156000 audit: BPF prog-id=106 op=LOAD Dec 16 12:14:32.156000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:14:32.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:32.170291 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:32.173428 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:14:32.173688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:32.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:32.173751 systemd[1]: kubelet.service: Consumed 82ms CPU time, 95.2M memory peak. Dec 16 12:14:32.175383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:32.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:32.374665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:32.383745 (kubelet)[3309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:14:32.497904 kubelet[3309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:14:32.497904 kubelet[3309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:14:32.497904 kubelet[3309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
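
The burst of BPF prog-id LOAD/UNLOAD audit events just above lines up with the "Reload requested from client PID 3189 ... Reloading finished in 261 ms" sequence; systemd re-creating its per-unit cgroup BPF programs during a daemon-reload is the usual explanation for that pattern, although the audit records themselves do not say so. A small counting sketch over a saved copy of the journal, assuming plain-text input:

    import re
    from collections import Counter

    # Tallies the BPF prog-id LOAD/UNLOAD audit events; the ids are opaque, so only
    # the balance of loads versus unloads is interesting here.
    BPF_EVENT = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def bpf_churn(journal_text: str) -> Counter:
        return Counter(op for _prog_id, op in BPF_EVENT.findall(journal_text))
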
Dec 16 12:14:32.498252 kubelet[3309]: I1216 12:14:32.497954 3309 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:14:33.023450 kubelet[3309]: I1216 12:14:33.023411 3309 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:14:33.023450 kubelet[3309]: I1216 12:14:33.023443 3309 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:14:33.023903 kubelet[3309]: I1216 12:14:33.023881 3309 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:14:33.046415 kubelet[3309]: E1216 12:14:33.046374 3309 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:14:33.047194 kubelet[3309]: I1216 12:14:33.047097 3309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:14:33.053309 kubelet[3309]: I1216 12:14:33.053289 3309 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:14:33.055939 kubelet[3309]: I1216 12:14:33.055866 3309 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:14:33.056150 kubelet[3309]: I1216 12:14:33.056127 3309 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:14:33.056333 kubelet[3309]: I1216 12:14:33.056217 3309 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-4d45b340a5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:14:33.056460 kubelet[3309]: I1216 12:14:33.056448 3309 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:14:33.056530 kubelet[3309]: I1216 
12:14:33.056522 3309 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:14:33.057361 kubelet[3309]: I1216 12:14:33.057305 3309 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:14:33.060073 kubelet[3309]: I1216 12:14:33.059988 3309 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:14:33.060073 kubelet[3309]: I1216 12:14:33.060012 3309 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:14:33.061043 kubelet[3309]: I1216 12:14:33.060969 3309 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:14:33.062079 kubelet[3309]: I1216 12:14:33.062033 3309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:14:33.065455 kubelet[3309]: E1216 12:14:33.065075 3309 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-4d45b340a5&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:14:33.065455 kubelet[3309]: E1216 12:14:33.065380 3309 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:14:33.065726 kubelet[3309]: I1216 12:14:33.065703 3309 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:14:33.066859 kubelet[3309]: I1216 12:14:33.066104 3309 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:14:33.066859 kubelet[3309]: W1216 12:14:33.066154 3309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
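
Every "connection refused" in this stretch points at the same endpoint, https://10.200.20.37:6443: the kubelet is up before any apiserver is listening, which is the normal ordering on a kubeadm control-plane node since the apiserver runs as a static pod out of /etc/kubernetes/manifests (the static pod path added a few lines above) and only exists once this kubelet starts it. A throwaway probe of that endpoint, with the address and port taken from the errors in this journal and everything else illustrative:

    import socket

    # Returns True once something is accepting TCP connections on the apiserver port.
    def apiserver_up(host: str = "10.200.20.37", port: int = 6443, timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

Until that flips to True, the certificate signing request, node and service watches, and lease updates logged around here will keep failing and retrying.
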
Dec 16 12:14:33.070361 kubelet[3309]: I1216 12:14:33.070340 3309 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:14:33.070418 kubelet[3309]: I1216 12:14:33.070382 3309 server.go:1289] "Started kubelet" Dec 16 12:14:33.072032 kubelet[3309]: I1216 12:14:33.071968 3309 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:14:33.072996 kubelet[3309]: I1216 12:14:33.072964 3309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:14:33.074780 kubelet[3309]: E1216 12:14:33.073525 3309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-a-4d45b340a5.1881b11e68be0061 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-4d45b340a5,UID:ci-4547.0.0-a-4d45b340a5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-4d45b340a5,},FirstTimestamp:2025-12-16 12:14:33.070362721 +0000 UTC m=+0.683099748,LastTimestamp:2025-12-16 12:14:33.070362721 +0000 UTC m=+0.683099748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-4d45b340a5,}" Dec 16 12:14:33.076177 kubelet[3309]: I1216 12:14:33.076141 3309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:14:33.076458 kubelet[3309]: I1216 12:14:33.076442 3309 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:14:33.077000 audit[3325]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:33.077000 audit[3325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffefe873c0 a2=0 a3=0 items=0 ppid=3309 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:14:33.078960 kubelet[3309]: I1216 12:14:33.078744 3309 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:14:33.077000 audit[3326]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:33.077000 audit[3326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5aaf1d0 a2=0 a3=0 items=0 ppid=3309 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:14:33.080002 kubelet[3309]: I1216 12:14:33.079651 3309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:14:33.080191 kubelet[3309]: I1216 12:14:33.080162 3309 volume_manager.go:297] "Starting 
Kubelet Volume Manager" Dec 16 12:14:33.080336 kubelet[3309]: E1216 12:14:33.080312 3309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" Dec 16 12:14:33.081063 kubelet[3309]: E1216 12:14:33.081034 3309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-4d45b340a5?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms" Dec 16 12:14:33.081382 kubelet[3309]: I1216 12:14:33.081363 3309 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:14:33.080000 audit[3328]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:33.080000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffeb4c3a10 a2=0 a3=0 items=0 ppid=3309 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.080000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:14:33.082839 kubelet[3309]: E1216 12:14:33.082826 3309 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:14:33.083024 kubelet[3309]: I1216 12:14:33.083011 3309 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:14:33.083095 kubelet[3309]: I1216 12:14:33.083088 3309 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:14:33.083151 kubelet[3309]: I1216 12:14:33.083129 3309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:14:33.083190 kubelet[3309]: I1216 12:14:33.083180 3309 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:14:33.082000 audit[3330]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:33.082000 audit[3330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffff4c3f0 a2=0 a3=0 items=0 ppid=3309 pid=3330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.082000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:14:33.087000 audit[3333]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:33.087000 audit[3333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc8ce7c00 a2=0 a3=0 items=0 ppid=3309 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.087000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:14:33.089295 kubelet[3309]: I1216 12:14:33.089255 3309 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:14:33.088000 audit[3334]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3334 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:33.088000 audit[3334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe96b1d20 a2=0 a3=0 items=0 ppid=3309 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:14:33.090223 kubelet[3309]: I1216 12:14:33.090201 3309 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:14:33.090223 kubelet[3309]: I1216 12:14:33.090219 3309 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:14:33.090267 kubelet[3309]: I1216 12:14:33.090235 3309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:14:33.090267 kubelet[3309]: I1216 12:14:33.090241 3309 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:14:33.090292 kubelet[3309]: E1216 12:14:33.090272 3309 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:14:33.089000 audit[3335]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:33.089000 audit[3335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc194470 a2=0 a3=0 items=0 ppid=3309 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:14:33.090000 audit[3336]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:33.090000 audit[3336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdbc7d6b0 a2=0 a3=0 items=0 ppid=3309 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:14:33.091000 audit[3337]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:33.091000 audit[3337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff1668790 a2=0 a3=0 items=0 ppid=3309 pid=3337 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:14:33.092000 audit[3338]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:33.092000 audit[3338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcde94e30 a2=0 a3=0 items=0 ppid=3309 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:14:33.093000 audit[3339]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:33.093000 audit[3339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9581fb0 a2=0 a3=0 items=0 ppid=3309 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.093000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:14:33.093000 audit[3340]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3340 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:33.093000 audit[3340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa541440 a2=0 a3=0 items=0 ppid=3309 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.093000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:14:33.096987 kubelet[3309]: E1216 12:14:33.096894 3309 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:14:33.096987 kubelet[3309]: E1216 12:14:33.096974 3309 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:14:33.103008 kubelet[3309]: I1216 12:14:33.102987 3309 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:14:33.103008 kubelet[3309]: I1216 12:14:33.103002 3309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:14:33.103142 kubelet[3309]: I1216 12:14:33.103065 3309 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:14:33.108861 kubelet[3309]: 
I1216 12:14:33.108837 3309 policy_none.go:49] "None policy: Start" Dec 16 12:14:33.108861 kubelet[3309]: I1216 12:14:33.108863 3309 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:14:33.108931 kubelet[3309]: I1216 12:14:33.108872 3309 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:14:33.116813 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:14:33.130021 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:14:33.132871 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:14:33.151320 kubelet[3309]: E1216 12:14:33.151293 3309 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:14:33.151648 kubelet[3309]: I1216 12:14:33.151626 3309 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:14:33.151749 kubelet[3309]: I1216 12:14:33.151717 3309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:14:33.152074 kubelet[3309]: I1216 12:14:33.152059 3309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:14:33.153218 kubelet[3309]: E1216 12:14:33.153183 3309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:14:33.153538 kubelet[3309]: E1216 12:14:33.153526 3309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-a-4d45b340a5\" not found" Dec 16 12:14:33.203931 systemd[1]: Created slice kubepods-burstable-pod614688d5d7de78748c9771bda2fac7f8.slice - libcontainer container kubepods-burstable-pod614688d5d7de78748c9771bda2fac7f8.slice. Dec 16 12:14:33.213349 kubelet[3309]: E1216 12:14:33.213164 3309 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.218103 systemd[1]: Created slice kubepods-burstable-pode502ff4b9b6ed57c2554c51d2cfca428.slice - libcontainer container kubepods-burstable-pode502ff4b9b6ed57c2554c51d2cfca428.slice. Dec 16 12:14:33.220190 kubelet[3309]: E1216 12:14:33.220089 3309 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.222348 systemd[1]: Created slice kubepods-burstable-pod637e5e90adfdfbbaec97e8ff7dec3e11.slice - libcontainer container kubepods-burstable-pod637e5e90adfdfbbaec97e8ff7dec3e11.slice. 
Dec 16 12:14:33.225499 kubelet[3309]: E1216 12:14:33.224250 3309 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.253446 kubelet[3309]: I1216 12:14:33.253417 3309 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.253778 kubelet[3309]: E1216 12:14:33.253746 3309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.282369 kubelet[3309]: E1216 12:14:33.282274 3309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-4d45b340a5?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Dec 16 12:14:33.284547 kubelet[3309]: I1216 12:14:33.284495 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/614688d5d7de78748c9771bda2fac7f8-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" (UID: \"614688d5d7de78748c9771bda2fac7f8\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.284547 kubelet[3309]: I1216 12:14:33.284518 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/614688d5d7de78748c9771bda2fac7f8-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" (UID: \"614688d5d7de78748c9771bda2fac7f8\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.284662 kubelet[3309]: I1216 12:14:33.284567 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.284662 kubelet[3309]: I1216 12:14:33.284602 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.284662 kubelet[3309]: I1216 12:14:33.284615 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.284662 kubelet[3309]: I1216 12:14:33.284627 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " 
pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.284662 kubelet[3309]: I1216 12:14:33.284644 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/614688d5d7de78748c9771bda2fac7f8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" (UID: \"614688d5d7de78748c9771bda2fac7f8\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.284740 kubelet[3309]: I1216 12:14:33.284654 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.284740 kubelet[3309]: I1216 12:14:33.284664 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/637e5e90adfdfbbaec97e8ff7dec3e11-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-4d45b340a5\" (UID: \"637e5e90adfdfbbaec97e8ff7dec3e11\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.456734 kubelet[3309]: I1216 12:14:33.456669 3309 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.457203 kubelet[3309]: E1216 12:14:33.457160 3309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.515134 containerd[2156]: time="2025-12-16T12:14:33.515045430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-4d45b340a5,Uid:614688d5d7de78748c9771bda2fac7f8,Namespace:kube-system,Attempt:0,}" Dec 16 12:14:33.521887 containerd[2156]: time="2025-12-16T12:14:33.521740025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-4d45b340a5,Uid:e502ff4b9b6ed57c2554c51d2cfca428,Namespace:kube-system,Attempt:0,}" Dec 16 12:14:33.525602 containerd[2156]: time="2025-12-16T12:14:33.525571949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-4d45b340a5,Uid:637e5e90adfdfbbaec97e8ff7dec3e11,Namespace:kube-system,Attempt:0,}" Dec 16 12:14:33.592100 containerd[2156]: time="2025-12-16T12:14:33.592047814Z" level=info msg="connecting to shim 8f66e67dcb7931f2680dafad4f33e3d5f777fb36aa3a05f71c7a375ada0b8231" address="unix:///run/containerd/s/a745538318ac650f0a96d38a3f8e17e8e34250be1bed7e7d20c0b7a0fd5fa818" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:14:33.617688 systemd[1]: Started cri-containerd-8f66e67dcb7931f2680dafad4f33e3d5f777fb36aa3a05f71c7a375ada0b8231.scope - libcontainer container 8f66e67dcb7931f2680dafad4f33e3d5f777fb36aa3a05f71c7a375ada0b8231. 
Dec 16 12:14:33.620188 containerd[2156]: time="2025-12-16T12:14:33.620144860Z" level=info msg="connecting to shim 20f7fdfa7b1eb99ae44d2ccdc2defbd95968733d51489a6b22132dc8512b980b" address="unix:///run/containerd/s/c6cd661b9d9da49104542294b111154b2b7b0c3bad2d35e8f10bb799517fdb01" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:14:33.622823 containerd[2156]: time="2025-12-16T12:14:33.622794862Z" level=info msg="connecting to shim 7f3ccfe060a8e87d36ac6baa1f060ef74bdfc4426fc39b5ada440dd9252244a3" address="unix:///run/containerd/s/0e660636e76a5ff5f0b1c09420ef4240b95966945ad5429299d9eb314dc08cad" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:14:33.637000 audit: BPF prog-id=107 op=LOAD Dec 16 12:14:33.637000 audit: BPF prog-id=108 op=LOAD Dec 16 12:14:33.637000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3351 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363665363764636237393331663236383064616661643466333365 Dec 16 12:14:33.638000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:14:33.638000 audit[3363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363665363764636237393331663236383064616661643466333365 Dec 16 12:14:33.638000 audit: BPF prog-id=109 op=LOAD Dec 16 12:14:33.638000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3351 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363665363764636237393331663236383064616661643466333365 Dec 16 12:14:33.638000 audit: BPF prog-id=110 op=LOAD Dec 16 12:14:33.638000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3351 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363665363764636237393331663236383064616661643466333365 Dec 16 12:14:33.638000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:14:33.638000 audit[3363]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363665363764636237393331663236383064616661643466333365 Dec 16 12:14:33.638000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:14:33.638000 audit[3363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363665363764636237393331663236383064616661643466333365 Dec 16 12:14:33.638000 audit: BPF prog-id=111 op=LOAD Dec 16 12:14:33.638000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3351 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363665363764636237393331663236383064616661643466333365 Dec 16 12:14:33.652681 systemd[1]: Started cri-containerd-20f7fdfa7b1eb99ae44d2ccdc2defbd95968733d51489a6b22132dc8512b980b.scope - libcontainer container 20f7fdfa7b1eb99ae44d2ccdc2defbd95968733d51489a6b22132dc8512b980b. Dec 16 12:14:33.656195 systemd[1]: Started cri-containerd-7f3ccfe060a8e87d36ac6baa1f060ef74bdfc4426fc39b5ada440dd9252244a3.scope - libcontainer container 7f3ccfe060a8e87d36ac6baa1f060ef74bdfc4426fc39b5ada440dd9252244a3. 
Dec 16 12:14:33.675000 audit: BPF prog-id=112 op=LOAD Dec 16 12:14:33.676000 audit: BPF prog-id=113 op=LOAD Dec 16 12:14:33.676000 audit[3416]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3384 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663766646661376231656239396165343464326363646332646566 Dec 16 12:14:33.676000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:14:33.676000 audit[3416]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3384 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663766646661376231656239396165343464326363646332646566 Dec 16 12:14:33.676000 audit: BPF prog-id=114 op=LOAD Dec 16 12:14:33.676000 audit[3416]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3384 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663766646661376231656239396165343464326363646332646566 Dec 16 12:14:33.676000 audit: BPF prog-id=115 op=LOAD Dec 16 12:14:33.676000 audit[3416]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3384 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663766646661376231656239396165343464326363646332646566 Dec 16 12:14:33.676000 audit: BPF prog-id=115 op=UNLOAD Dec 16 12:14:33.676000 audit[3416]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3384 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663766646661376231656239396165343464326363646332646566 Dec 16 12:14:33.676000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:14:33.676000 audit[3416]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3384 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663766646661376231656239396165343464326363646332646566 Dec 16 12:14:33.677000 audit: BPF prog-id=116 op=LOAD Dec 16 12:14:33.677000 audit[3416]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3384 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663766646661376231656239396165343464326363646332646566 Dec 16 12:14:33.682328 containerd[2156]: time="2025-12-16T12:14:33.682192020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-4d45b340a5,Uid:614688d5d7de78748c9771bda2fac7f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f66e67dcb7931f2680dafad4f33e3d5f777fb36aa3a05f71c7a375ada0b8231\"" Dec 16 12:14:33.682000 audit: BPF prog-id=117 op=LOAD Dec 16 12:14:33.683879 kubelet[3309]: E1216 12:14:33.683849 3309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-4d45b340a5?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Dec 16 12:14:33.685000 audit: BPF prog-id=118 op=LOAD Dec 16 12:14:33.685000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3395 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336363666530363061386538376433366163366261613166303630 Dec 16 12:14:33.685000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:14:33.685000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3395 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336363666530363061386538376433366163366261613166303630 Dec 16 12:14:33.685000 audit: BPF prog-id=119 op=LOAD Dec 16 12:14:33.685000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3395 
pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336363666530363061386538376433366163366261613166303630 Dec 16 12:14:33.687000 audit: BPF prog-id=120 op=LOAD Dec 16 12:14:33.687000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3395 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336363666530363061386538376433366163366261613166303630 Dec 16 12:14:33.688000 audit: BPF prog-id=120 op=UNLOAD Dec 16 12:14:33.688000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3395 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336363666530363061386538376433366163366261613166303630 Dec 16 12:14:33.688000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:14:33.688000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3395 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336363666530363061386538376433366163366261613166303630 Dec 16 12:14:33.688000 audit: BPF prog-id=121 op=LOAD Dec 16 12:14:33.688000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3395 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336363666530363061386538376433366163366261613166303630 Dec 16 12:14:33.699482 containerd[2156]: time="2025-12-16T12:14:33.697838108Z" level=info msg="CreateContainer within sandbox \"8f66e67dcb7931f2680dafad4f33e3d5f777fb36aa3a05f71c7a375ada0b8231\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:14:33.716346 containerd[2156]: time="2025-12-16T12:14:33.716306079Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-4d45b340a5,Uid:e502ff4b9b6ed57c2554c51d2cfca428,Namespace:kube-system,Attempt:0,} returns sandbox id \"20f7fdfa7b1eb99ae44d2ccdc2defbd95968733d51489a6b22132dc8512b980b\"" Dec 16 12:14:33.724465 containerd[2156]: time="2025-12-16T12:14:33.724383281Z" level=info msg="Container 2a4da196b1e19cfecd8aa978bb31c25652b4b4e7aeda55a553746495a81aca92: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:14:33.725767 containerd[2156]: time="2025-12-16T12:14:33.725495292Z" level=info msg="CreateContainer within sandbox \"20f7fdfa7b1eb99ae44d2ccdc2defbd95968733d51489a6b22132dc8512b980b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:14:33.727433 containerd[2156]: time="2025-12-16T12:14:33.727404265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-4d45b340a5,Uid:637e5e90adfdfbbaec97e8ff7dec3e11,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f3ccfe060a8e87d36ac6baa1f060ef74bdfc4426fc39b5ada440dd9252244a3\"" Dec 16 12:14:33.746038 containerd[2156]: time="2025-12-16T12:14:33.745959621Z" level=info msg="CreateContainer within sandbox \"8f66e67dcb7931f2680dafad4f33e3d5f777fb36aa3a05f71c7a375ada0b8231\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2a4da196b1e19cfecd8aa978bb31c25652b4b4e7aeda55a553746495a81aca92\"" Dec 16 12:14:33.746800 containerd[2156]: time="2025-12-16T12:14:33.746774906Z" level=info msg="StartContainer for \"2a4da196b1e19cfecd8aa978bb31c25652b4b4e7aeda55a553746495a81aca92\"" Dec 16 12:14:33.747577 containerd[2156]: time="2025-12-16T12:14:33.747556026Z" level=info msg="connecting to shim 2a4da196b1e19cfecd8aa978bb31c25652b4b4e7aeda55a553746495a81aca92" address="unix:///run/containerd/s/a745538318ac650f0a96d38a3f8e17e8e34250be1bed7e7d20c0b7a0fd5fa818" protocol=ttrpc version=3 Dec 16 12:14:33.752835 containerd[2156]: time="2025-12-16T12:14:33.752800952Z" level=info msg="CreateContainer within sandbox \"7f3ccfe060a8e87d36ac6baa1f060ef74bdfc4426fc39b5ada440dd9252244a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:14:33.761698 containerd[2156]: time="2025-12-16T12:14:33.761670748Z" level=info msg="Container bd7b4e6ec7153a3d8ceb6a89e57ccfbcd4b677213fd49505a2e354d8588e2f62: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:14:33.766649 systemd[1]: Started cri-containerd-2a4da196b1e19cfecd8aa978bb31c25652b4b4e7aeda55a553746495a81aca92.scope - libcontainer container 2a4da196b1e19cfecd8aa978bb31c25652b4b4e7aeda55a553746495a81aca92. 
Dec 16 12:14:33.774000 audit: BPF prog-id=122 op=LOAD Dec 16 12:14:33.775000 audit: BPF prog-id=123 op=LOAD Dec 16 12:14:33.775000 audit[3483]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3351 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346461313936623165313963666563643861613937386262333163 Dec 16 12:14:33.775000 audit: BPF prog-id=123 op=UNLOAD Dec 16 12:14:33.775000 audit[3483]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346461313936623165313963666563643861613937386262333163 Dec 16 12:14:33.775000 audit: BPF prog-id=124 op=LOAD Dec 16 12:14:33.775000 audit[3483]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3351 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346461313936623165313963666563643861613937386262333163 Dec 16 12:14:33.775000 audit: BPF prog-id=125 op=LOAD Dec 16 12:14:33.775000 audit[3483]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3351 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346461313936623165313963666563643861613937386262333163 Dec 16 12:14:33.775000 audit: BPF prog-id=125 op=UNLOAD Dec 16 12:14:33.775000 audit[3483]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346461313936623165313963666563643861613937386262333163 Dec 16 12:14:33.775000 audit: BPF prog-id=124 op=UNLOAD Dec 16 12:14:33.775000 audit[3483]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3351 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346461313936623165313963666563643861613937386262333163 Dec 16 12:14:33.775000 audit: BPF prog-id=126 op=LOAD Dec 16 12:14:33.775000 audit[3483]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3351 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261346461313936623165313963666563643861613937386262333163 Dec 16 12:14:33.779884 containerd[2156]: time="2025-12-16T12:14:33.779855762Z" level=info msg="CreateContainer within sandbox \"20f7fdfa7b1eb99ae44d2ccdc2defbd95968733d51489a6b22132dc8512b980b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bd7b4e6ec7153a3d8ceb6a89e57ccfbcd4b677213fd49505a2e354d8588e2f62\"" Dec 16 12:14:33.780708 containerd[2156]: time="2025-12-16T12:14:33.780579901Z" level=info msg="StartContainer for \"bd7b4e6ec7153a3d8ceb6a89e57ccfbcd4b677213fd49505a2e354d8588e2f62\"" Dec 16 12:14:33.781966 containerd[2156]: time="2025-12-16T12:14:33.781926280Z" level=info msg="connecting to shim bd7b4e6ec7153a3d8ceb6a89e57ccfbcd4b677213fd49505a2e354d8588e2f62" address="unix:///run/containerd/s/c6cd661b9d9da49104542294b111154b2b7b0c3bad2d35e8f10bb799517fdb01" protocol=ttrpc version=3 Dec 16 12:14:33.790414 containerd[2156]: time="2025-12-16T12:14:33.790374992Z" level=info msg="Container 375ca37fdb4e2cc63cd5fffbcb6b8813bd9dc565663580a9434e4d4ca340f7a4: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:14:33.805593 containerd[2156]: time="2025-12-16T12:14:33.804536551Z" level=info msg="StartContainer for \"2a4da196b1e19cfecd8aa978bb31c25652b4b4e7aeda55a553746495a81aca92\" returns successfully" Dec 16 12:14:33.807657 systemd[1]: Started cri-containerd-bd7b4e6ec7153a3d8ceb6a89e57ccfbcd4b677213fd49505a2e354d8588e2f62.scope - libcontainer container bd7b4e6ec7153a3d8ceb6a89e57ccfbcd4b677213fd49505a2e354d8588e2f62. 
Dec 16 12:14:33.810911 containerd[2156]: time="2025-12-16T12:14:33.810881478Z" level=info msg="CreateContainer within sandbox \"7f3ccfe060a8e87d36ac6baa1f060ef74bdfc4426fc39b5ada440dd9252244a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"375ca37fdb4e2cc63cd5fffbcb6b8813bd9dc565663580a9434e4d4ca340f7a4\"" Dec 16 12:14:33.811968 containerd[2156]: time="2025-12-16T12:14:33.811638949Z" level=info msg="StartContainer for \"375ca37fdb4e2cc63cd5fffbcb6b8813bd9dc565663580a9434e4d4ca340f7a4\"" Dec 16 12:14:33.812767 containerd[2156]: time="2025-12-16T12:14:33.812745455Z" level=info msg="connecting to shim 375ca37fdb4e2cc63cd5fffbcb6b8813bd9dc565663580a9434e4d4ca340f7a4" address="unix:///run/containerd/s/0e660636e76a5ff5f0b1c09420ef4240b95966945ad5429299d9eb314dc08cad" protocol=ttrpc version=3 Dec 16 12:14:33.828000 audit: BPF prog-id=127 op=LOAD Dec 16 12:14:33.829000 audit: BPF prog-id=128 op=LOAD Dec 16 12:14:33.829000 audit[3506]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3384 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376234653665633731353361336438636562366138396535376363 Dec 16 12:14:33.829000 audit: BPF prog-id=128 op=UNLOAD Dec 16 12:14:33.829000 audit[3506]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3384 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376234653665633731353361336438636562366138396535376363 Dec 16 12:14:33.829000 audit: BPF prog-id=129 op=LOAD Dec 16 12:14:33.829000 audit[3506]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3384 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376234653665633731353361336438636562366138396535376363 Dec 16 12:14:33.829000 audit: BPF prog-id=130 op=LOAD Dec 16 12:14:33.829000 audit[3506]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3384 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.829000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376234653665633731353361336438636562366138396535376363 Dec 16 12:14:33.830000 audit: BPF prog-id=130 op=UNLOAD Dec 16 12:14:33.830000 audit[3506]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3384 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376234653665633731353361336438636562366138396535376363 Dec 16 12:14:33.830000 audit: BPF prog-id=129 op=UNLOAD Dec 16 12:14:33.830000 audit[3506]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3384 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376234653665633731353361336438636562366138396535376363 Dec 16 12:14:33.830000 audit: BPF prog-id=131 op=LOAD Dec 16 12:14:33.830000 audit[3506]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3384 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264376234653665633731353361336438636562366138396535376363 Dec 16 12:14:33.836672 systemd[1]: Started cri-containerd-375ca37fdb4e2cc63cd5fffbcb6b8813bd9dc565663580a9434e4d4ca340f7a4.scope - libcontainer container 375ca37fdb4e2cc63cd5fffbcb6b8813bd9dc565663580a9434e4d4ca340f7a4. 
Dec 16 12:14:33.853000 audit: BPF prog-id=132 op=LOAD Dec 16 12:14:33.854000 audit: BPF prog-id=133 op=LOAD Dec 16 12:14:33.854000 audit[3536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3395 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356361333766646234653263633633636435666666626362366238 Dec 16 12:14:33.854000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:14:33.854000 audit[3536]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3395 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356361333766646234653263633633636435666666626362366238 Dec 16 12:14:33.854000 audit: BPF prog-id=134 op=LOAD Dec 16 12:14:33.854000 audit[3536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3395 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356361333766646234653263633633636435666666626362366238 Dec 16 12:14:33.854000 audit: BPF prog-id=135 op=LOAD Dec 16 12:14:33.854000 audit[3536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3395 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356361333766646234653263633633636435666666626362366238 Dec 16 12:14:33.854000 audit: BPF prog-id=135 op=UNLOAD Dec 16 12:14:33.854000 audit[3536]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3395 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356361333766646234653263633633636435666666626362366238 Dec 16 12:14:33.854000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:14:33.854000 audit[3536]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3395 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356361333766646234653263633633636435666666626362366238 Dec 16 12:14:33.854000 audit: BPF prog-id=136 op=LOAD Dec 16 12:14:33.854000 audit[3536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3395 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:33.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337356361333766646234653263633633636435666666626362366238 Dec 16 12:14:33.860146 kubelet[3309]: I1216 12:14:33.860084 3309 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:33.876428 containerd[2156]: time="2025-12-16T12:14:33.876388868Z" level=info msg="StartContainer for \"bd7b4e6ec7153a3d8ceb6a89e57ccfbcd4b677213fd49505a2e354d8588e2f62\" returns successfully" Dec 16 12:14:33.888430 containerd[2156]: time="2025-12-16T12:14:33.888390931Z" level=info msg="StartContainer for \"375ca37fdb4e2cc63cd5fffbcb6b8813bd9dc565663580a9434e4d4ca340f7a4\" returns successfully" Dec 16 12:14:34.107373 kubelet[3309]: E1216 12:14:34.106779 3309 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:34.111961 kubelet[3309]: E1216 12:14:34.111862 3309 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:34.112973 kubelet[3309]: E1216 12:14:34.112955 3309 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.114432 kubelet[3309]: E1216 12:14:35.114279 3309 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.116901 kubelet[3309]: E1216 12:14:35.116782 3309 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.567628 kubelet[3309]: E1216 12:14:35.567250 3309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-a-4d45b340a5\" not found" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.627143 kubelet[3309]: I1216 12:14:35.626966 3309 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.627143 kubelet[3309]: E1216 12:14:35.627006 3309 kubelet_node_status.go:548] "Error updating 
node status, will retry" err="error getting node \"ci-4547.0.0-a-4d45b340a5\": node \"ci-4547.0.0-a-4d45b340a5\" not found" Dec 16 12:14:35.681512 kubelet[3309]: I1216 12:14:35.680978 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.707128 kubelet[3309]: E1216 12:14:35.707086 3309 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-4d45b340a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.707363 kubelet[3309]: I1216 12:14:35.707299 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.709729 kubelet[3309]: E1216 12:14:35.709704 3309 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.709890 kubelet[3309]: I1216 12:14:35.709826 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:35.713544 kubelet[3309]: E1216 12:14:35.712665 3309 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:36.001614 kubelet[3309]: I1216 12:14:36.001537 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:36.004020 kubelet[3309]: E1216 12:14:36.003952 3309 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:36.066577 kubelet[3309]: I1216 12:14:36.066323 3309 apiserver.go:52] "Watching apiserver" Dec 16 12:14:36.083505 kubelet[3309]: I1216 12:14:36.083463 3309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:14:36.297220 kubelet[3309]: I1216 12:14:36.296696 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:36.299161 kubelet[3309]: E1216 12:14:36.299135 3309 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:37.926580 systemd[1]: Reload requested from client PID 3588 ('systemctl') (unit session-10.scope)... Dec 16 12:14:37.926594 systemd[1]: Reloading... Dec 16 12:14:38.004506 zram_generator::config[3641]: No configuration found. Dec 16 12:14:38.174457 systemd[1]: Reloading finished in 247 ms. Dec 16 12:14:38.200546 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:38.214311 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:14:38.214625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:14:38.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:38.214692 systemd[1]: kubelet.service: Consumed 857ms CPU time, 125.3M memory peak. Dec 16 12:14:38.230566 kernel: kauditd_printk_skb: 205 callbacks suppressed Dec 16 12:14:38.230665 kernel: audit: type=1131 audit(1765887278.214:419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:38.217674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:14:38.217000 audit: BPF prog-id=137 op=LOAD Dec 16 12:14:38.235353 kernel: audit: type=1334 audit(1765887278.217:420): prog-id=137 op=LOAD Dec 16 12:14:38.217000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:14:38.240645 kernel: audit: type=1334 audit(1765887278.217:421): prog-id=102 op=UNLOAD Dec 16 12:14:38.240715 kernel: audit: type=1334 audit(1765887278.234:422): prog-id=138 op=LOAD Dec 16 12:14:38.234000 audit: BPF prog-id=138 op=LOAD Dec 16 12:14:38.244000 audit: BPF prog-id=139 op=LOAD Dec 16 12:14:38.249284 kernel: audit: type=1334 audit(1765887278.244:423): prog-id=139 op=LOAD Dec 16 12:14:38.249346 kernel: audit: type=1334 audit(1765887278.244:424): prog-id=103 op=UNLOAD Dec 16 12:14:38.244000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:14:38.244000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:14:38.258703 kernel: audit: type=1334 audit(1765887278.244:425): prog-id=104 op=UNLOAD Dec 16 12:14:38.252000 audit: BPF prog-id=140 op=LOAD Dec 16 12:14:38.263203 kernel: audit: type=1334 audit(1765887278.252:426): prog-id=140 op=LOAD Dec 16 12:14:38.252000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:14:38.267512 kernel: audit: type=1334 audit(1765887278.252:427): prog-id=92 op=UNLOAD Dec 16 12:14:38.253000 audit: BPF prog-id=141 op=LOAD Dec 16 12:14:38.272308 kernel: audit: type=1334 audit(1765887278.253:428): prog-id=141 op=LOAD Dec 16 12:14:38.253000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:14:38.258000 audit: BPF prog-id=142 op=LOAD Dec 16 12:14:38.258000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:14:38.262000 audit: BPF prog-id=143 op=LOAD Dec 16 12:14:38.267000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:14:38.272000 audit: BPF prog-id=144 op=LOAD Dec 16 12:14:38.272000 audit: BPF prog-id=145 op=LOAD Dec 16 12:14:38.272000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:14:38.272000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:14:38.273000 audit: BPF prog-id=146 op=LOAD Dec 16 12:14:38.273000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:14:38.273000 audit: BPF prog-id=147 op=LOAD Dec 16 12:14:38.273000 audit: BPF prog-id=148 op=LOAD Dec 16 12:14:38.273000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:14:38.273000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:14:38.273000 audit: BPF prog-id=149 op=LOAD Dec 16 12:14:38.273000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:14:38.273000 audit: BPF prog-id=150 op=LOAD Dec 16 12:14:38.274000 audit: BPF prog-id=151 op=LOAD Dec 16 12:14:38.274000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:14:38.274000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:14:38.274000 audit: BPF prog-id=152 op=LOAD Dec 16 12:14:38.275000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:14:38.275000 audit: BPF prog-id=153 op=LOAD Dec 16 12:14:38.275000 audit: BPF prog-id=154 op=LOAD Dec 16 12:14:38.275000 audit: BPF prog-id=88 op=UNLOAD Dec 16 
12:14:38.275000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:14:38.275000 audit: BPF prog-id=155 op=LOAD Dec 16 12:14:38.275000 audit: BPF prog-id=156 op=LOAD Dec 16 12:14:38.275000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:14:38.275000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:14:38.382799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:14:38.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:38.389743 (kubelet)[3702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:14:38.418550 kubelet[3702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:14:38.418550 kubelet[3702]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:14:38.418550 kubelet[3702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:14:38.418550 kubelet[3702]: I1216 12:14:38.418144 3702 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:14:38.424513 kubelet[3702]: I1216 12:14:38.424000 3702 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:14:38.424513 kubelet[3702]: I1216 12:14:38.424028 3702 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:14:38.424513 kubelet[3702]: I1216 12:14:38.424193 3702 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:14:38.425318 kubelet[3702]: I1216 12:14:38.425304 3702 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:14:38.427375 kubelet[3702]: I1216 12:14:38.427362 3702 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:14:38.431619 kubelet[3702]: I1216 12:14:38.431606 3702 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:14:38.435175 kubelet[3702]: I1216 12:14:38.435159 3702 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:14:38.435443 kubelet[3702]: I1216 12:14:38.435420 3702 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:14:38.435793 kubelet[3702]: I1216 12:14:38.435525 3702 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-4d45b340a5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:14:38.435926 kubelet[3702]: I1216 12:14:38.435916 3702 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:14:38.435967 kubelet[3702]: I1216 12:14:38.435961 3702 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:14:38.436044 kubelet[3702]: I1216 12:14:38.436036 3702 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:14:38.436225 kubelet[3702]: I1216 12:14:38.436215 3702 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:14:38.436276 kubelet[3702]: I1216 12:14:38.436269 3702 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:14:38.436326 kubelet[3702]: I1216 12:14:38.436320 3702 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:14:38.436369 kubelet[3702]: I1216 12:14:38.436363 3702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:14:38.439751 kubelet[3702]: I1216 12:14:38.438648 3702 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:14:38.439751 kubelet[3702]: I1216 12:14:38.438992 3702 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:14:38.444129 kubelet[3702]: I1216 12:14:38.443880 3702 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:14:38.444352 kubelet[3702]: I1216 12:14:38.444233 3702 server.go:1289] "Started kubelet" Dec 16 12:14:38.446399 kubelet[3702]: I1216 12:14:38.446384 3702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:14:38.459136 kubelet[3702]: I1216 
12:14:38.458403 3702 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:14:38.460674 kubelet[3702]: I1216 12:14:38.460639 3702 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:14:38.461364 kubelet[3702]: I1216 12:14:38.461311 3702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:14:38.461606 kubelet[3702]: I1216 12:14:38.461587 3702 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:14:38.462946 kubelet[3702]: I1216 12:14:38.462930 3702 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:14:38.463308 kubelet[3702]: I1216 12:14:38.463288 3702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:14:38.463823 kubelet[3702]: I1216 12:14:38.463811 3702 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:14:38.465367 kubelet[3702]: E1216 12:14:38.460653 3702 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:14:38.465446 kubelet[3702]: I1216 12:14:38.465389 3702 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:14:38.466749 kubelet[3702]: I1216 12:14:38.466661 3702 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:14:38.471396 kubelet[3702]: I1216 12:14:38.471280 3702 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:14:38.471577 kubelet[3702]: I1216 12:14:38.471512 3702 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:14:38.471708 kubelet[3702]: I1216 12:14:38.471682 3702 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:14:38.480187 kubelet[3702]: I1216 12:14:38.480153 3702 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:14:38.480187 kubelet[3702]: I1216 12:14:38.480186 3702 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:14:38.480302 kubelet[3702]: I1216 12:14:38.480205 3702 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:14:38.480302 kubelet[3702]: I1216 12:14:38.480211 3702 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:14:38.480302 kubelet[3702]: E1216 12:14:38.480251 3702 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:14:38.517830 kubelet[3702]: I1216 12:14:38.517790 3702 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:14:38.517830 kubelet[3702]: I1216 12:14:38.517809 3702 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:14:38.517830 kubelet[3702]: I1216 12:14:38.517827 3702 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:14:38.518037 kubelet[3702]: I1216 12:14:38.517944 3702 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:14:38.518037 kubelet[3702]: I1216 12:14:38.517950 3702 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:14:38.518037 kubelet[3702]: I1216 12:14:38.517967 3702 policy_none.go:49] "None policy: Start" Dec 16 12:14:38.518037 kubelet[3702]: I1216 12:14:38.517975 3702 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:14:38.518037 kubelet[3702]: I1216 12:14:38.517982 3702 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:14:38.518127 kubelet[3702]: I1216 12:14:38.518123 3702 state_mem.go:75] "Updated machine memory state" Dec 16 12:14:38.527582 kubelet[3702]: E1216 12:14:38.527201 3702 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:14:38.527582 kubelet[3702]: I1216 12:14:38.527368 3702 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:14:38.527582 kubelet[3702]: I1216 12:14:38.527379 3702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:14:38.528196 kubelet[3702]: I1216 12:14:38.528126 3702 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:14:38.530093 kubelet[3702]: E1216 12:14:38.529633 3702 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:14:38.581959 kubelet[3702]: I1216 12:14:38.581920 3702 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.582307 kubelet[3702]: I1216 12:14:38.582282 3702 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.582612 kubelet[3702]: I1216 12:14:38.582594 3702 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.590437 kubelet[3702]: I1216 12:14:38.590412 3702 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:14:38.593691 kubelet[3702]: I1216 12:14:38.593642 3702 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:14:38.596217 kubelet[3702]: I1216 12:14:38.596114 3702 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:14:38.638578 kubelet[3702]: I1216 12:14:38.638500 3702 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.652709 kubelet[3702]: I1216 12:14:38.652378 3702 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.652709 kubelet[3702]: I1216 12:14:38.652467 3702 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667154 kubelet[3702]: I1216 12:14:38.667118 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/614688d5d7de78748c9771bda2fac7f8-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" (UID: \"614688d5d7de78748c9771bda2fac7f8\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667154 kubelet[3702]: I1216 12:14:38.667152 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667299 kubelet[3702]: I1216 12:14:38.667165 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667299 kubelet[3702]: I1216 12:14:38.667177 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667299 kubelet[3702]: I1216 
12:14:38.667206 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/637e5e90adfdfbbaec97e8ff7dec3e11-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-4d45b340a5\" (UID: \"637e5e90adfdfbbaec97e8ff7dec3e11\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667299 kubelet[3702]: I1216 12:14:38.667215 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/614688d5d7de78748c9771bda2fac7f8-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" (UID: \"614688d5d7de78748c9771bda2fac7f8\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667299 kubelet[3702]: I1216 12:14:38.667224 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/614688d5d7de78748c9771bda2fac7f8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" (UID: \"614688d5d7de78748c9771bda2fac7f8\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667388 kubelet[3702]: I1216 12:14:38.667234 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:38.667388 kubelet[3702]: I1216 12:14:38.667243 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e502ff4b9b6ed57c2554c51d2cfca428-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-4d45b340a5\" (UID: \"e502ff4b9b6ed57c2554c51d2cfca428\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:39.438193 kubelet[3702]: I1216 12:14:39.437996 3702 apiserver.go:52] "Watching apiserver" Dec 16 12:14:39.466509 kubelet[3702]: I1216 12:14:39.465993 3702 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:14:39.496871 kubelet[3702]: I1216 12:14:39.496748 3702 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:39.497501 kubelet[3702]: I1216 12:14:39.497416 3702 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:39.515990 kubelet[3702]: I1216 12:14:39.515922 3702 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:14:39.516290 kubelet[3702]: E1216 12:14:39.516145 3702 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-4d45b340a5\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:39.519490 kubelet[3702]: I1216 12:14:39.519387 3702 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:14:39.519490 kubelet[3702]: E1216 12:14:39.519431 3702 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4547.0.0-a-4d45b340a5\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-a-4d45b340a5" Dec 16 12:14:39.542889 kubelet[3702]: I1216 12:14:39.542529 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-a-4d45b340a5" podStartSLOduration=1.542513383 podStartE2EDuration="1.542513383s" podCreationTimestamp="2025-12-16 12:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:14:39.527689884 +0000 UTC m=+1.134307179" watchObservedRunningTime="2025-12-16 12:14:39.542513383 +0000 UTC m=+1.149130630" Dec 16 12:14:39.542889 kubelet[3702]: I1216 12:14:39.542621 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-4d45b340a5" podStartSLOduration=1.54261865 podStartE2EDuration="1.54261865s" podCreationTimestamp="2025-12-16 12:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:14:39.540247829 +0000 UTC m=+1.146865076" watchObservedRunningTime="2025-12-16 12:14:39.54261865 +0000 UTC m=+1.149235897" Dec 16 12:14:39.613370 kubelet[3702]: I1216 12:14:39.613316 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-a-4d45b340a5" podStartSLOduration=1.6133005649999999 podStartE2EDuration="1.613300565s" podCreationTimestamp="2025-12-16 12:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:14:39.612906397 +0000 UTC m=+1.219523676" watchObservedRunningTime="2025-12-16 12:14:39.613300565 +0000 UTC m=+1.219917812" Dec 16 12:14:43.288334 kubelet[3702]: I1216 12:14:43.288277 3702 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:14:43.289070 containerd[2156]: time="2025-12-16T12:14:43.288937648Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:14:43.289427 kubelet[3702]: I1216 12:14:43.289114 3702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:14:44.409189 systemd[1]: Created slice kubepods-besteffort-pod646357a4_39a4_4116_96c0_384b653d67cf.slice - libcontainer container kubepods-besteffort-pod646357a4_39a4_4116_96c0_384b653d67cf.slice. 
Dec 16 12:14:44.497210 kubelet[3702]: I1216 12:14:44.497135 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/646357a4-39a4-4116-96c0-384b653d67cf-kube-proxy\") pod \"kube-proxy-6fz6p\" (UID: \"646357a4-39a4-4116-96c0-384b653d67cf\") " pod="kube-system/kube-proxy-6fz6p" Dec 16 12:14:44.497210 kubelet[3702]: I1216 12:14:44.497166 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/646357a4-39a4-4116-96c0-384b653d67cf-xtables-lock\") pod \"kube-proxy-6fz6p\" (UID: \"646357a4-39a4-4116-96c0-384b653d67cf\") " pod="kube-system/kube-proxy-6fz6p" Dec 16 12:14:44.497210 kubelet[3702]: I1216 12:14:44.497181 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/646357a4-39a4-4116-96c0-384b653d67cf-lib-modules\") pod \"kube-proxy-6fz6p\" (UID: \"646357a4-39a4-4116-96c0-384b653d67cf\") " pod="kube-system/kube-proxy-6fz6p" Dec 16 12:14:44.497210 kubelet[3702]: I1216 12:14:44.497192 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ph8k\" (UniqueName: \"kubernetes.io/projected/646357a4-39a4-4116-96c0-384b653d67cf-kube-api-access-2ph8k\") pod \"kube-proxy-6fz6p\" (UID: \"646357a4-39a4-4116-96c0-384b653d67cf\") " pod="kube-system/kube-proxy-6fz6p" Dec 16 12:14:44.510132 systemd[1]: Created slice kubepods-besteffort-pod0eda8925_f461_49ea_935b_722ab33fbb89.slice - libcontainer container kubepods-besteffort-pod0eda8925_f461_49ea_935b_722ab33fbb89.slice. Dec 16 12:14:44.597861 kubelet[3702]: I1216 12:14:44.597452 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzdc\" (UniqueName: \"kubernetes.io/projected/0eda8925-f461-49ea-935b-722ab33fbb89-kube-api-access-pzzdc\") pod \"tigera-operator-7dcd859c48-z2wwm\" (UID: \"0eda8925-f461-49ea-935b-722ab33fbb89\") " pod="tigera-operator/tigera-operator-7dcd859c48-z2wwm" Dec 16 12:14:44.597861 kubelet[3702]: I1216 12:14:44.597510 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0eda8925-f461-49ea-935b-722ab33fbb89-var-lib-calico\") pod \"tigera-operator-7dcd859c48-z2wwm\" (UID: \"0eda8925-f461-49ea-935b-722ab33fbb89\") " pod="tigera-operator/tigera-operator-7dcd859c48-z2wwm" Dec 16 12:14:44.720510 containerd[2156]: time="2025-12-16T12:14:44.720385128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6fz6p,Uid:646357a4-39a4-4116-96c0-384b653d67cf,Namespace:kube-system,Attempt:0,}" Dec 16 12:14:44.760563 containerd[2156]: time="2025-12-16T12:14:44.760485634Z" level=info msg="connecting to shim 5ad84307edec03e34e55232e349b7277fcd8af5475dd0bc30ecece9527be84ca" address="unix:///run/containerd/s/ed2a04829ef3395d656e87b2206f9acccb2fc8de91dc691039c0d4f37861d204" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:14:44.781660 systemd[1]: Started cri-containerd-5ad84307edec03e34e55232e349b7277fcd8af5475dd0bc30ecece9527be84ca.scope - libcontainer container 5ad84307edec03e34e55232e349b7277fcd8af5475dd0bc30ecece9527be84ca. 
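The audit PROCTITLE fields in the records above and below are hex-encoded command lines with NUL bytes separating the arguments. A minimal Python sketch for decoding one follows; the sample hex is copied from the runc record earlier in this log and truncated for brevity, and the variable names are illustrative only:

#!/usr/bin/env python3
# Decode an audit PROCTITLE value: the hex string is the process argv with
# NUL (0x00) bytes separating the individual arguments.
hexval = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
          "2F6B38732E696F002D2D6C6F67")  # truncated sample taken from the log above
args = bytes.fromhex(hexval).split(b"\x00")
print(" ".join(a.decode("utf-8", "replace") for a in args))
# -> runc --root /run/containerd/runc/k8s.io --log

The same decoding applied to the PROCTITLE of the NETFILTER_CFG records further below recovers the iptables invocations behind them; the first such record decodes to: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle.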
Dec 16 12:14:44.789000 audit: BPF prog-id=157 op=LOAD Dec 16 12:14:44.792622 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:14:44.792675 kernel: audit: type=1334 audit(1765887284.789:461): prog-id=157 op=LOAD Dec 16 12:14:44.796000 audit: BPF prog-id=158 op=LOAD Dec 16 12:14:44.800641 kernel: audit: type=1334 audit(1765887284.796:462): prog-id=158 op=LOAD Dec 16 12:14:44.796000 audit[3772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.816805 kernel: audit: type=1300 audit(1765887284.796:462): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.817215 containerd[2156]: time="2025-12-16T12:14:44.817184642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-z2wwm,Uid:0eda8925-f461-49ea-935b-722ab33fbb89,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:14:44.836237 kernel: audit: type=1327 audit(1765887284.796:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.796000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:14:44.840997 kernel: audit: type=1334 audit(1765887284.796:463): prog-id=158 op=UNLOAD Dec 16 12:14:44.796000 audit[3772]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.857887 kernel: audit: type=1300 audit(1765887284.796:463): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.875339 kernel: audit: type=1327 audit(1765887284.796:463): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.796000 audit: BPF prog-id=159 op=LOAD Dec 16 12:14:44.879821 kernel: audit: type=1334 audit(1765887284.796:464): prog-id=159 
op=LOAD Dec 16 12:14:44.796000 audit[3772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.895548 kernel: audit: type=1300 audit(1765887284.796:464): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.912442 kernel: audit: type=1327 audit(1765887284.796:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.800000 audit: BPF prog-id=160 op=LOAD Dec 16 12:14:44.800000 audit[3772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.835000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:14:44.835000 audit[3772]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.835000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:14:44.835000 audit[3772]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.835000 audit: BPF prog-id=161 op=LOAD Dec 16 12:14:44.835000 audit[3772]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 
ppid=3761 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643834333037656465633033653334653535323332653334396237 Dec 16 12:14:44.930169 containerd[2156]: time="2025-12-16T12:14:44.930110892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6fz6p,Uid:646357a4-39a4-4116-96c0-384b653d67cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ad84307edec03e34e55232e349b7277fcd8af5475dd0bc30ecece9527be84ca\"" Dec 16 12:14:44.933919 containerd[2156]: time="2025-12-16T12:14:44.933884643Z" level=info msg="connecting to shim f1519d35d62ad8764c990dd0c34aeb521a2c191909117d88baaa9e8fdbd7fdde" address="unix:///run/containerd/s/eb80a68d62f436a7402450d3a83c27629d9c735eb2f5aea180b30fa019369ce3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:14:44.941440 containerd[2156]: time="2025-12-16T12:14:44.941356962Z" level=info msg="CreateContainer within sandbox \"5ad84307edec03e34e55232e349b7277fcd8af5475dd0bc30ecece9527be84ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:14:44.961653 systemd[1]: Started cri-containerd-f1519d35d62ad8764c990dd0c34aeb521a2c191909117d88baaa9e8fdbd7fdde.scope - libcontainer container f1519d35d62ad8764c990dd0c34aeb521a2c191909117d88baaa9e8fdbd7fdde. Dec 16 12:14:44.966012 containerd[2156]: time="2025-12-16T12:14:44.965973123Z" level=info msg="Container 86ca68681bbc57942efcdb67b22e7b6814ae41081f589779c028125f7d4052af: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:14:44.969000 audit: BPF prog-id=162 op=LOAD Dec 16 12:14:44.970000 audit: BPF prog-id=163 op=LOAD Dec 16 12:14:44.970000 audit[3820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3808 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631353139643335643632616438373634633939306464306333346165 Dec 16 12:14:44.970000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:14:44.970000 audit[3820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3808 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631353139643335643632616438373634633939306464306333346165 Dec 16 12:14:44.970000 audit: BPF prog-id=164 op=LOAD Dec 16 12:14:44.970000 audit[3820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3808 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631353139643335643632616438373634633939306464306333346165 Dec 16 12:14:44.970000 audit: BPF prog-id=165 op=LOAD Dec 16 12:14:44.970000 audit[3820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3808 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631353139643335643632616438373634633939306464306333346165 Dec 16 12:14:44.970000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:14:44.970000 audit[3820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3808 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631353139643335643632616438373634633939306464306333346165 Dec 16 12:14:44.970000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:14:44.970000 audit[3820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3808 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631353139643335643632616438373634633939306464306333346165 Dec 16 12:14:44.970000 audit: BPF prog-id=166 op=LOAD Dec 16 12:14:44.970000 audit[3820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3808 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:44.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631353139643335643632616438373634633939306464306333346165 Dec 16 12:14:44.984862 containerd[2156]: time="2025-12-16T12:14:44.984763749Z" level=info msg="CreateContainer within sandbox \"5ad84307edec03e34e55232e349b7277fcd8af5475dd0bc30ecece9527be84ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86ca68681bbc57942efcdb67b22e7b6814ae41081f589779c028125f7d4052af\"" Dec 16 12:14:44.986146 containerd[2156]: time="2025-12-16T12:14:44.986053235Z" level=info msg="StartContainer for 
\"86ca68681bbc57942efcdb67b22e7b6814ae41081f589779c028125f7d4052af\"" Dec 16 12:14:44.987951 containerd[2156]: time="2025-12-16T12:14:44.987525810Z" level=info msg="connecting to shim 86ca68681bbc57942efcdb67b22e7b6814ae41081f589779c028125f7d4052af" address="unix:///run/containerd/s/ed2a04829ef3395d656e87b2206f9acccb2fc8de91dc691039c0d4f37861d204" protocol=ttrpc version=3 Dec 16 12:14:44.996103 containerd[2156]: time="2025-12-16T12:14:44.996023228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-z2wwm,Uid:0eda8925-f461-49ea-935b-722ab33fbb89,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f1519d35d62ad8764c990dd0c34aeb521a2c191909117d88baaa9e8fdbd7fdde\"" Dec 16 12:14:44.997258 containerd[2156]: time="2025-12-16T12:14:44.997227945Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:14:45.011847 systemd[1]: Started cri-containerd-86ca68681bbc57942efcdb67b22e7b6814ae41081f589779c028125f7d4052af.scope - libcontainer container 86ca68681bbc57942efcdb67b22e7b6814ae41081f589779c028125f7d4052af. Dec 16 12:14:45.047000 audit: BPF prog-id=167 op=LOAD Dec 16 12:14:45.047000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3761 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836636136383638316262633537393432656663646236376232326537 Dec 16 12:14:45.047000 audit: BPF prog-id=168 op=LOAD Dec 16 12:14:45.047000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3761 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836636136383638316262633537393432656663646236376232326537 Dec 16 12:14:45.047000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:14:45.047000 audit[3846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3761 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836636136383638316262633537393432656663646236376232326537 Dec 16 12:14:45.047000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:14:45.047000 audit[3846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3761 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.047000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836636136383638316262633537393432656663646236376232326537 Dec 16 12:14:45.047000 audit: BPF prog-id=169 op=LOAD Dec 16 12:14:45.047000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3761 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836636136383638316262633537393432656663646236376232326537 Dec 16 12:14:45.068891 containerd[2156]: time="2025-12-16T12:14:45.068846100Z" level=info msg="StartContainer for \"86ca68681bbc57942efcdb67b22e7b6814ae41081f589779c028125f7d4052af\" returns successfully" Dec 16 12:14:45.150000 audit[3909]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.150000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6afc010 a2=0 a3=1 items=0 ppid=3860 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:14:45.152000 audit[3910]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.152000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedc5aaa0 a2=0 a3=1 items=0 ppid=3860 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:14:45.155000 audit[3913]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.155000 audit[3913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffe4fbe0 a2=0 a3=1 items=0 ppid=3860 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:14:45.156000 audit[3912]: NETFILTER_CFG table=mangle:60 family=10 entries=1 op=nft_register_chain pid=3912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.156000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdae619b0 a2=0 a3=1 items=0 ppid=3860 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:14:45.160000 audit[3917]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=3917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.160000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbb83510 a2=0 a3=1 items=0 ppid=3860 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:14:45.161000 audit[3918]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=3918 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.161000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6bb47e0 a2=0 a3=1 items=0 ppid=3860 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:14:45.255000 audit[3919]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.255000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff95ee180 a2=0 a3=1 items=0 ppid=3860 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:14:45.257000 audit[3921]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.257000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffce7c6ba0 a2=0 a3=1 items=0 ppid=3860 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:14:45.260000 audit[3924]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.260000 audit[3924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff1eacbc0 a2=0 a3=1 items=0 ppid=3860 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:14:45.261000 audit[3925]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.261000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea415fb0 a2=0 a3=1 items=0 ppid=3860 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:14:45.263000 audit[3927]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.263000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1401e80 a2=0 a3=1 items=0 ppid=3860 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:14:45.264000 audit[3928]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.264000 audit[3928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4bbf930 a2=0 a3=1 items=0 ppid=3860 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:14:45.267000 audit[3930]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.267000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd6047b60 a2=0 a3=1 items=0 ppid=3860 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:14:45.270000 audit[3933]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.270000 audit[3933]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=744 a0=3 a1=ffffc9c3eac0 a2=0 a3=1 items=0 ppid=3860 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:14:45.271000 audit[3934]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.271000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefc13280 a2=0 a3=1 items=0 ppid=3860 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:14:45.273000 audit[3936]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.273000 audit[3936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd9e78390 a2=0 a3=1 items=0 ppid=3860 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:14:45.274000 audit[3937]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.274000 audit[3937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfcd8d20 a2=0 a3=1 items=0 ppid=3860 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:14:45.277000 audit[3939]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.277000 audit[3939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2e6a930 a2=0 a3=1 items=0 ppid=3860 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:14:45.280000 audit[3942]: 
NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.280000 audit[3942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff4370760 a2=0 a3=1 items=0 ppid=3860 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:14:45.283000 audit[3945]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.283000 audit[3945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff84394e0 a2=0 a3=1 items=0 ppid=3860 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:14:45.284000 audit[3946]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.284000 audit[3946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe0ced540 a2=0 a3=1 items=0 ppid=3860 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:14:45.286000 audit[3948]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.286000 audit[3948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc4807da0 a2=0 a3=1 items=0 ppid=3860 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:14:45.289000 audit[3951]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.289000 audit[3951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffb10c4e0 a2=0 a3=1 items=0 ppid=3860 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.289000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:14:45.291000 audit[3952]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.291000 audit[3952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc94549a0 a2=0 a3=1 items=0 ppid=3860 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.291000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:14:45.293000 audit[3954]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:14:45.293000 audit[3954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff9927df0 a2=0 a3=1 items=0 ppid=3860 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:14:45.364000 audit[3960]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3960 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:45.364000 audit[3960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff5f1e880 a2=0 a3=1 items=0 ppid=3860 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:45.391000 audit[3960]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3960 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:45.391000 audit[3960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff5f1e880 a2=0 a3=1 items=0 ppid=3860 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:45.393000 audit[3965]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.393000 audit[3965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd62692e0 a2=0 a3=1 items=0 ppid=3860 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:14:45.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:14:45.395000 audit[3967]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.395000 audit[3967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffecec6fa0 a2=0 a3=1 items=0 ppid=3860 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:14:45.398000 audit[3970]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.398000 audit[3970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe1964210 a2=0 a3=1 items=0 ppid=3860 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.398000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:14:45.399000 audit[3971]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.399000 audit[3971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec904550 a2=0 a3=1 items=0 ppid=3860 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:14:45.401000 audit[3973]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.401000 audit[3973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe27f8bb0 a2=0 a3=1 items=0 ppid=3860 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:14:45.402000 audit[3974]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.402000 audit[3974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=ffffcb526d20 a2=0 a3=1 items=0 ppid=3860 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:14:45.404000 audit[3976]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.404000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc6bcf630 a2=0 a3=1 items=0 ppid=3860 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:14:45.407000 audit[3979]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.407000 audit[3979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc17ae020 a2=0 a3=1 items=0 ppid=3860 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:14:45.408000 audit[3980]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.408000 audit[3980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9210f40 a2=0 a3=1 items=0 ppid=3860 pid=3980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:14:45.410000 audit[3982]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.410000 audit[3982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd2cf14e0 a2=0 a3=1 items=0 ppid=3860 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:14:45.411000 audit[3983]: NETFILTER_CFG table=filter:94 
family=10 entries=1 op=nft_register_chain pid=3983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.411000 audit[3983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5189f10 a2=0 a3=1 items=0 ppid=3860 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:14:45.413000 audit[3985]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.413000 audit[3985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7208ab0 a2=0 a3=1 items=0 ppid=3860 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:14:45.416000 audit[3988]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.416000 audit[3988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcdfe2d30 a2=0 a3=1 items=0 ppid=3860 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:14:45.419000 audit[3991]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.419000 audit[3991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffe62fbf0 a2=0 a3=1 items=0 ppid=3860 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:14:45.420000 audit[3992]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.420000 audit[3992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffecde2310 a2=0 a3=1 items=0 ppid=3860 pid=3992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.420000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:14:45.422000 audit[3994]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.422000 audit[3994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe70e25d0 a2=0 a3=1 items=0 ppid=3860 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:14:45.425000 audit[3997]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.425000 audit[3997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc24f7e40 a2=0 a3=1 items=0 ppid=3860 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:14:45.426000 audit[3998]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.426000 audit[3998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6a20fe0 a2=0 a3=1 items=0 ppid=3860 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:14:45.428000 audit[4000]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.428000 audit[4000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc3d51c70 a2=0 a3=1 items=0 ppid=3860 pid=4000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:14:45.429000 audit[4001]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.429000 audit[4001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2c90ba0 a2=0 a3=1 items=0 ppid=3860 pid=4001 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:14:45.431000 audit[4003]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.431000 audit[4003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffde002650 a2=0 a3=1 items=0 ppid=3860 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:14:45.433000 audit[4006]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:14:45.433000 audit[4006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff18d3b60 a2=0 a3=1 items=0 ppid=3860 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:14:45.436000 audit[4008]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:14:45.436000 audit[4008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffcd639d90 a2=0 a3=1 items=0 ppid=3860 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.436000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:45.437000 audit[4008]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:14:45.437000 audit[4008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffcd639d90 a2=0 a3=1 items=0 ppid=3860 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:45.437000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:45.521044 kubelet[3702]: I1216 12:14:45.520264 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6fz6p" podStartSLOduration=1.520247527 podStartE2EDuration="1.520247527s" podCreationTimestamp="2025-12-16 12:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:14:45.520169735 +0000 UTC m=+7.126786998" 
watchObservedRunningTime="2025-12-16 12:14:45.520247527 +0000 UTC m=+7.126864774" Dec 16 12:14:47.412194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534010685.mount: Deactivated successfully. Dec 16 12:14:49.348699 containerd[2156]: time="2025-12-16T12:14:49.348586229Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:49.351490 containerd[2156]: time="2025-12-16T12:14:49.351449364Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:14:49.354747 containerd[2156]: time="2025-12-16T12:14:49.354699589Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:49.359511 containerd[2156]: time="2025-12-16T12:14:49.359460027Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:14:49.360038 containerd[2156]: time="2025-12-16T12:14:49.359806682Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 4.361422217s" Dec 16 12:14:49.360038 containerd[2156]: time="2025-12-16T12:14:49.359877727Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:14:49.367858 containerd[2156]: time="2025-12-16T12:14:49.367833611Z" level=info msg="CreateContainer within sandbox \"f1519d35d62ad8764c990dd0c34aeb521a2c191909117d88baaa9e8fdbd7fdde\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:14:49.384986 containerd[2156]: time="2025-12-16T12:14:49.384561073Z" level=info msg="Container a786e80702aedbb8c417f353dc460384536bfef75506390242b59563937a5291: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:14:49.400732 containerd[2156]: time="2025-12-16T12:14:49.400685559Z" level=info msg="CreateContainer within sandbox \"f1519d35d62ad8764c990dd0c34aeb521a2c191909117d88baaa9e8fdbd7fdde\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a786e80702aedbb8c417f353dc460384536bfef75506390242b59563937a5291\"" Dec 16 12:14:49.401545 containerd[2156]: time="2025-12-16T12:14:49.401261805Z" level=info msg="StartContainer for \"a786e80702aedbb8c417f353dc460384536bfef75506390242b59563937a5291\"" Dec 16 12:14:49.402272 containerd[2156]: time="2025-12-16T12:14:49.402251336Z" level=info msg="connecting to shim a786e80702aedbb8c417f353dc460384536bfef75506390242b59563937a5291" address="unix:///run/containerd/s/eb80a68d62f436a7402450d3a83c27629d9c735eb2f5aea180b30fa019369ce3" protocol=ttrpc version=3 Dec 16 12:14:49.420649 systemd[1]: Started cri-containerd-a786e80702aedbb8c417f353dc460384536bfef75506390242b59563937a5291.scope - libcontainer container a786e80702aedbb8c417f353dc460384536bfef75506390242b59563937a5291. 
Dec 16 12:14:49.427000 audit: BPF prog-id=170 op=LOAD Dec 16 12:14:49.427000 audit: BPF prog-id=171 op=LOAD Dec 16 12:14:49.427000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3808 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:49.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383665383037303261656462623863343137663335336463343630 Dec 16 12:14:49.427000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:14:49.427000 audit[4019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3808 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:49.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383665383037303261656462623863343137663335336463343630 Dec 16 12:14:49.427000 audit: BPF prog-id=172 op=LOAD Dec 16 12:14:49.427000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3808 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:49.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383665383037303261656462623863343137663335336463343630 Dec 16 12:14:49.427000 audit: BPF prog-id=173 op=LOAD Dec 16 12:14:49.427000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3808 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:49.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383665383037303261656462623863343137663335336463343630 Dec 16 12:14:49.427000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:14:49.427000 audit[4019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3808 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:49.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383665383037303261656462623863343137663335336463343630 Dec 16 12:14:49.427000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:14:49.427000 audit[4019]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3808 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:49.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383665383037303261656462623863343137663335336463343630 Dec 16 12:14:49.427000 audit: BPF prog-id=174 op=LOAD Dec 16 12:14:49.427000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3808 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:49.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383665383037303261656462623863343137663335336463343630 Dec 16 12:14:49.450972 containerd[2156]: time="2025-12-16T12:14:49.450927621Z" level=info msg="StartContainer for \"a786e80702aedbb8c417f353dc460384536bfef75506390242b59563937a5291\" returns successfully" Dec 16 12:14:49.588532 kubelet[3702]: I1216 12:14:49.587704 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-z2wwm" podStartSLOduration=1.224100086 podStartE2EDuration="5.58768605s" podCreationTimestamp="2025-12-16 12:14:44 +0000 UTC" firstStartedPulling="2025-12-16 12:14:44.996979073 +0000 UTC m=+6.603596320" lastFinishedPulling="2025-12-16 12:14:49.360565029 +0000 UTC m=+10.967182284" observedRunningTime="2025-12-16 12:14:49.536895703 +0000 UTC m=+11.143512958" watchObservedRunningTime="2025-12-16 12:14:49.58768605 +0000 UTC m=+11.194303297" Dec 16 12:14:54.593094 sudo[2657]: pam_unix(sudo:session): session closed for user root Dec 16 12:14:54.593000 audit[2657]: USER_END pid=2657 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.611898 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:14:54.612003 kernel: audit: type=1106 audit(1765887294.593:541): pid=2657 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.610000 audit[2657]: CRED_DISP pid=2657 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.629814 kernel: audit: type=1104 audit(1765887294.610:542): pid=2657 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:54.683294 sshd[2656]: Connection closed by 10.200.16.10 port 41210 Dec 16 12:14:54.683141 sshd-session[2652]: pam_unix(sshd:session): session closed for user core Dec 16 12:14:54.684000 audit[2652]: USER_END pid=2652 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:54.687048 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:41210.service: Deactivated successfully. Dec 16 12:14:54.693147 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:14:54.693396 systemd[1]: session-10.scope: Consumed 3.541s CPU time, 222.3M memory peak. Dec 16 12:14:54.710559 systemd-logind[2131]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:14:54.711446 systemd-logind[2131]: Removed session 10. Dec 16 12:14:54.684000 audit[2652]: CRED_DISP pid=2652 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:54.726210 kernel: audit: type=1106 audit(1765887294.684:543): pid=2652 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:54.726355 kernel: audit: type=1104 audit(1765887294.684:544): pid=2652 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:14:54.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:41210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.744376 kernel: audit: type=1131 audit(1765887294.685:545): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.37:22-10.200.16.10:41210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:56.117000 audit[4097]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:56.117000 audit[4097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff75077c0 a2=0 a3=1 items=0 ppid=3860 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:56.147856 kernel: audit: type=1325 audit(1765887296.117:546): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:56.147976 kernel: audit: type=1300 audit(1765887296.117:546): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff75077c0 a2=0 a3=1 items=0 ppid=3860 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:56.117000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:56.162933 kernel: audit: type=1327 audit(1765887296.117:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:56.167000 audit[4097]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:56.167000 audit[4097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff75077c0 a2=0 a3=1 items=0 ppid=3860 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:56.196561 kernel: audit: type=1325 audit(1765887296.167:547): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:56.196657 kernel: audit: type=1300 audit(1765887296.167:547): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff75077c0 a2=0 a3=1 items=0 ppid=3860 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:56.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:56.204000 audit[4099]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:56.204000 audit[4099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd28e17f0 a2=0 a3=1 items=0 ppid=3860 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:56.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:56.207000 audit[4099]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4099 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:14:56.207000 audit[4099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd28e17f0 a2=0 a3=1 items=0 ppid=3860 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:56.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:58.619000 audit[4101]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4101 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:58.619000 audit[4101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc3fa8da0 a2=0 a3=1 items=0 ppid=3860 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:58.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:58.633000 audit[4101]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4101 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:58.633000 audit[4101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc3fa8da0 a2=0 a3=1 items=0 ppid=3860 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:58.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:58.674000 audit[4103]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:58.674000 audit[4103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffca6eebc0 a2=0 a3=1 items=0 ppid=3860 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:58.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:58.683000 audit[4103]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:58.683000 audit[4103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca6eebc0 a2=0 a3=1 items=0 ppid=3860 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:58.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:59.721172 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 12:14:59.721305 kernel: audit: type=1325 audit(1765887299.707:554): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:59.707000 
audit[4105]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:59.707000 audit[4105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcaf30de0 a2=0 a3=1 items=0 ppid=3860 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:59.756903 kernel: audit: type=1300 audit(1765887299.707:554): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcaf30de0 a2=0 a3=1 items=0 ppid=3860 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:59.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:59.768000 audit[4105]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:59.770662 kernel: audit: type=1327 audit(1765887299.707:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:59.768000 audit[4105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcaf30de0 a2=0 a3=1 items=0 ppid=3860 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:59.783598 kernel: audit: type=1325 audit(1765887299.768:555): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4105 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:14:59.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:14:59.810943 kernel: audit: type=1300 audit(1765887299.768:555): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcaf30de0 a2=0 a3=1 items=0 ppid=3860 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:14:59.811018 kernel: audit: type=1327 audit(1765887299.768:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:00.812000 audit[4107]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:00.812000 audit[4107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd91e3f70 a2=0 a3=1 items=0 ppid=3860 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:00.841358 kernel: audit: type=1325 audit(1765887300.812:556): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:00.841447 kernel: audit: type=1300 audit(1765887300.812:556): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd91e3f70 a2=0 
a3=1 items=0 ppid=3860 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:00.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:00.851031 kernel: audit: type=1327 audit(1765887300.812:556): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:00.841000 audit[4107]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:00.863176 kernel: audit: type=1325 audit(1765887300.841:557): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:00.841000 audit[4107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd91e3f70 a2=0 a3=1 items=0 ppid=3860 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:00.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:00.883564 systemd[1]: Created slice kubepods-besteffort-pod69c0bcfe_1e5c_4954_a486_cfb50eef1149.slice - libcontainer container kubepods-besteffort-pod69c0bcfe_1e5c_4954_a486_cfb50eef1149.slice. Dec 16 12:15:00.888202 kubelet[3702]: I1216 12:15:00.888098 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2skd9\" (UniqueName: \"kubernetes.io/projected/69c0bcfe-1e5c-4954-a486-cfb50eef1149-kube-api-access-2skd9\") pod \"calico-typha-6f7b4b5bff-9r98t\" (UID: \"69c0bcfe-1e5c-4954-a486-cfb50eef1149\") " pod="calico-system/calico-typha-6f7b4b5bff-9r98t" Dec 16 12:15:00.888202 kubelet[3702]: I1216 12:15:00.888131 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c0bcfe-1e5c-4954-a486-cfb50eef1149-tigera-ca-bundle\") pod \"calico-typha-6f7b4b5bff-9r98t\" (UID: \"69c0bcfe-1e5c-4954-a486-cfb50eef1149\") " pod="calico-system/calico-typha-6f7b4b5bff-9r98t" Dec 16 12:15:00.888202 kubelet[3702]: I1216 12:15:00.888145 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/69c0bcfe-1e5c-4954-a486-cfb50eef1149-typha-certs\") pod \"calico-typha-6f7b4b5bff-9r98t\" (UID: \"69c0bcfe-1e5c-4954-a486-cfb50eef1149\") " pod="calico-system/calico-typha-6f7b4b5bff-9r98t" Dec 16 12:15:01.079601 systemd[1]: Created slice kubepods-besteffort-pod0b01593c_a7b3_47d1_9a86_5201322c4ebd.slice - libcontainer container kubepods-besteffort-pod0b01593c_a7b3_47d1_9a86_5201322c4ebd.slice. 
Dec 16 12:15:01.090548 kubelet[3702]: I1216 12:15:01.090382 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-policysync\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.090950 kubelet[3702]: I1216 12:15:01.090616 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b01593c-a7b3-47d1-9a86-5201322c4ebd-tigera-ca-bundle\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.090950 kubelet[3702]: I1216 12:15:01.090663 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-cni-net-dir\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.090950 kubelet[3702]: I1216 12:15:01.090676 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-cni-bin-dir\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.090950 kubelet[3702]: I1216 12:15:01.090687 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjz82\" (UniqueName: \"kubernetes.io/projected/0b01593c-a7b3-47d1-9a86-5201322c4ebd-kube-api-access-cjz82\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.090950 kubelet[3702]: I1216 12:15:01.090699 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-lib-modules\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.091040 kubelet[3702]: I1216 12:15:01.090711 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-var-run-calico\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.091040 kubelet[3702]: I1216 12:15:01.090721 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-cni-log-dir\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.091040 kubelet[3702]: I1216 12:15:01.090730 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0b01593c-a7b3-47d1-9a86-5201322c4ebd-node-certs\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.091040 kubelet[3702]: I1216 12:15:01.090739 3702 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-var-lib-calico\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.091040 kubelet[3702]: I1216 12:15:01.090747 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-xtables-lock\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.091112 kubelet[3702]: I1216 12:15:01.090758 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0b01593c-a7b3-47d1-9a86-5201322c4ebd-flexvol-driver-host\") pod \"calico-node-rqrnv\" (UID: \"0b01593c-a7b3-47d1-9a86-5201322c4ebd\") " pod="calico-system/calico-node-rqrnv" Dec 16 12:15:01.189573 containerd[2156]: time="2025-12-16T12:15:01.189525520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f7b4b5bff-9r98t,Uid:69c0bcfe-1e5c-4954-a486-cfb50eef1149,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:01.193933 kubelet[3702]: E1216 12:15:01.193720 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.193933 kubelet[3702]: W1216 12:15:01.193786 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.193933 kubelet[3702]: E1216 12:15:01.193805 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.195445 kubelet[3702]: E1216 12:15:01.195393 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.195445 kubelet[3702]: W1216 12:15:01.195406 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.195668 kubelet[3702]: E1216 12:15:01.195418 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.196030 kubelet[3702]: E1216 12:15:01.195870 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.196030 kubelet[3702]: W1216 12:15:01.195880 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.196030 kubelet[3702]: E1216 12:15:01.195890 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.204087 kubelet[3702]: E1216 12:15:01.204040 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.205395 kubelet[3702]: W1216 12:15:01.204140 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.205395 kubelet[3702]: E1216 12:15:01.204156 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.209111 kubelet[3702]: E1216 12:15:01.209072 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.209111 kubelet[3702]: W1216 12:15:01.209086 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.209111 kubelet[3702]: E1216 12:15:01.209097 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.238204 containerd[2156]: time="2025-12-16T12:15:01.238157853Z" level=info msg="connecting to shim 139efe624760698c00ca7fef0392d28a8b5e4c7bbc0fa0bc44d055c1817bdcc5" address="unix:///run/containerd/s/1b7d72aaf00f2fc2852ea1858e7a62867859c993af5a81e29f36de4aaafd123e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:01.259661 systemd[1]: Started cri-containerd-139efe624760698c00ca7fef0392d28a8b5e4c7bbc0fa0bc44d055c1817bdcc5.scope - libcontainer container 139efe624760698c00ca7fef0392d28a8b5e4c7bbc0fa0bc44d055c1817bdcc5. 
Dec 16 12:15:01.273180 kubelet[3702]: E1216 12:15:01.272902 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:01.280000 audit: BPF prog-id=175 op=LOAD Dec 16 12:15:01.280000 audit: BPF prog-id=176 op=LOAD Dec 16 12:15:01.280000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4125 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133396566653632343736303639386330306361376665663033393264 Dec 16 12:15:01.281000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:15:01.281000 audit[4137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4125 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133396566653632343736303639386330306361376665663033393264 Dec 16 12:15:01.281000 audit: BPF prog-id=177 op=LOAD Dec 16 12:15:01.281000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4125 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133396566653632343736303639386330306361376665663033393264 Dec 16 12:15:01.281000 audit: BPF prog-id=178 op=LOAD Dec 16 12:15:01.281000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4125 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133396566653632343736303639386330306361376665663033393264 Dec 16 12:15:01.281000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:15:01.281000 audit[4137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4125 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.281000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133396566653632343736303639386330306361376665663033393264 Dec 16 12:15:01.281000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:15:01.281000 audit[4137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4125 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133396566653632343736303639386330306361376665663033393264 Dec 16 12:15:01.281000 audit: BPF prog-id=179 op=LOAD Dec 16 12:15:01.281000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4125 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133396566653632343736303639386330306361376665663033393264 Dec 16 12:15:01.291254 kubelet[3702]: E1216 12:15:01.291230 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.291446 kubelet[3702]: W1216 12:15:01.291349 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.291446 kubelet[3702]: E1216 12:15:01.291372 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.291722 kubelet[3702]: E1216 12:15:01.291685 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.291861 kubelet[3702]: W1216 12:15:01.291699 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.291861 kubelet[3702]: E1216 12:15:01.291800 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.292257 kubelet[3702]: E1216 12:15:01.292091 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.292257 kubelet[3702]: W1216 12:15:01.292197 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.292257 kubelet[3702]: E1216 12:15:01.292209 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.292808 kubelet[3702]: E1216 12:15:01.292794 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.293007 kubelet[3702]: W1216 12:15:01.292880 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.293007 kubelet[3702]: E1216 12:15:01.292898 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.293409 kubelet[3702]: E1216 12:15:01.293352 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.293409 kubelet[3702]: W1216 12:15:01.293365 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.293409 kubelet[3702]: E1216 12:15:01.293376 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.293704 kubelet[3702]: E1216 12:15:01.293632 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.293704 kubelet[3702]: W1216 12:15:01.293641 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.293704 kubelet[3702]: E1216 12:15:01.293650 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.293908 kubelet[3702]: E1216 12:15:01.293862 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.293908 kubelet[3702]: W1216 12:15:01.293870 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.293908 kubelet[3702]: E1216 12:15:01.293878 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.294396 kubelet[3702]: E1216 12:15:01.294323 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.294396 kubelet[3702]: W1216 12:15:01.294336 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.294396 kubelet[3702]: E1216 12:15:01.294346 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.294765 kubelet[3702]: E1216 12:15:01.294675 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.294765 kubelet[3702]: W1216 12:15:01.294687 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.294765 kubelet[3702]: E1216 12:15:01.294696 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.294914 kubelet[3702]: E1216 12:15:01.294902 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.294963 kubelet[3702]: W1216 12:15:01.294954 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.295077 kubelet[3702]: E1216 12:15:01.294995 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.295173 kubelet[3702]: E1216 12:15:01.295164 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.295303 kubelet[3702]: W1216 12:15:01.295215 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.295303 kubelet[3702]: E1216 12:15:01.295229 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.295689 kubelet[3702]: E1216 12:15:01.295651 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.295689 kubelet[3702]: W1216 12:15:01.295663 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.295689 kubelet[3702]: E1216 12:15:01.295673 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.296046 kubelet[3702]: E1216 12:15:01.295989 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.296046 kubelet[3702]: W1216 12:15:01.296004 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.296046 kubelet[3702]: E1216 12:15:01.296014 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.296370 kubelet[3702]: E1216 12:15:01.296318 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.296370 kubelet[3702]: W1216 12:15:01.296329 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.296370 kubelet[3702]: E1216 12:15:01.296338 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.297548 kubelet[3702]: E1216 12:15:01.296687 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.297548 kubelet[3702]: W1216 12:15:01.296699 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.297548 kubelet[3702]: E1216 12:15:01.296710 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.297636 kubelet[3702]: E1216 12:15:01.297611 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.297636 kubelet[3702]: W1216 12:15:01.297623 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.297668 kubelet[3702]: E1216 12:15:01.297635 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.297832 kubelet[3702]: E1216 12:15:01.297819 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.297832 kubelet[3702]: W1216 12:15:01.297830 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.297964 kubelet[3702]: E1216 12:15:01.297839 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.298047 kubelet[3702]: E1216 12:15:01.298033 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.298047 kubelet[3702]: W1216 12:15:01.298042 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.298099 kubelet[3702]: E1216 12:15:01.298049 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.298160 kubelet[3702]: E1216 12:15:01.298148 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.298160 kubelet[3702]: W1216 12:15:01.298157 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.298203 kubelet[3702]: E1216 12:15:01.298163 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.298306 kubelet[3702]: E1216 12:15:01.298293 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.298306 kubelet[3702]: W1216 12:15:01.298302 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.298368 kubelet[3702]: E1216 12:15:01.298308 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.298545 kubelet[3702]: E1216 12:15:01.298530 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.298545 kubelet[3702]: W1216 12:15:01.298541 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.298610 kubelet[3702]: E1216 12:15:01.298549 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.298610 kubelet[3702]: I1216 12:15:01.298568 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77a7712d-2394-4a4f-8873-2dd27305d176-registration-dir\") pod \"csi-node-driver-lrs86\" (UID: \"77a7712d-2394-4a4f-8873-2dd27305d176\") " pod="calico-system/csi-node-driver-lrs86" Dec 16 12:15:01.298760 kubelet[3702]: E1216 12:15:01.298746 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.298760 kubelet[3702]: W1216 12:15:01.298758 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.298813 kubelet[3702]: E1216 12:15:01.298768 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.298813 kubelet[3702]: I1216 12:15:01.298800 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77a7712d-2394-4a4f-8873-2dd27305d176-socket-dir\") pod \"csi-node-driver-lrs86\" (UID: \"77a7712d-2394-4a4f-8873-2dd27305d176\") " pod="calico-system/csi-node-driver-lrs86" Dec 16 12:15:01.298962 kubelet[3702]: E1216 12:15:01.298949 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.298962 kubelet[3702]: W1216 12:15:01.298958 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.299009 kubelet[3702]: E1216 12:15:01.298968 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.299009 kubelet[3702]: I1216 12:15:01.298982 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxckm\" (UniqueName: \"kubernetes.io/projected/77a7712d-2394-4a4f-8873-2dd27305d176-kube-api-access-pxckm\") pod \"csi-node-driver-lrs86\" (UID: \"77a7712d-2394-4a4f-8873-2dd27305d176\") " pod="calico-system/csi-node-driver-lrs86" Dec 16 12:15:01.299126 kubelet[3702]: E1216 12:15:01.299106 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.299126 kubelet[3702]: W1216 12:15:01.299116 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.299126 kubelet[3702]: E1216 12:15:01.299122 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.299264 kubelet[3702]: I1216 12:15:01.299137 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77a7712d-2394-4a4f-8873-2dd27305d176-varrun\") pod \"csi-node-driver-lrs86\" (UID: \"77a7712d-2394-4a4f-8873-2dd27305d176\") " pod="calico-system/csi-node-driver-lrs86" Dec 16 12:15:01.299329 kubelet[3702]: E1216 12:15:01.299314 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.299329 kubelet[3702]: W1216 12:15:01.299324 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.299373 kubelet[3702]: E1216 12:15:01.299331 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.299373 kubelet[3702]: I1216 12:15:01.299347 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77a7712d-2394-4a4f-8873-2dd27305d176-kubelet-dir\") pod \"csi-node-driver-lrs86\" (UID: \"77a7712d-2394-4a4f-8873-2dd27305d176\") " pod="calico-system/csi-node-driver-lrs86" Dec 16 12:15:01.299554 kubelet[3702]: E1216 12:15:01.299538 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.299554 kubelet[3702]: W1216 12:15:01.299550 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.299617 kubelet[3702]: E1216 12:15:01.299557 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.299677 kubelet[3702]: E1216 12:15:01.299668 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.299677 kubelet[3702]: W1216 12:15:01.299675 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.299733 kubelet[3702]: E1216 12:15:01.299681 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.299821 kubelet[3702]: E1216 12:15:01.299811 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.299821 kubelet[3702]: W1216 12:15:01.299819 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.299869 kubelet[3702]: E1216 12:15:01.299825 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.299944 kubelet[3702]: E1216 12:15:01.299930 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.299944 kubelet[3702]: W1216 12:15:01.299938 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.299944 kubelet[3702]: E1216 12:15:01.299944 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.300063 kubelet[3702]: E1216 12:15:01.300049 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.300063 kubelet[3702]: W1216 12:15:01.300058 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.300063 kubelet[3702]: E1216 12:15:01.300063 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.300183 kubelet[3702]: E1216 12:15:01.300153 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.300183 kubelet[3702]: W1216 12:15:01.300160 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.300183 kubelet[3702]: E1216 12:15:01.300165 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.300376 kubelet[3702]: E1216 12:15:01.300362 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.300376 kubelet[3702]: W1216 12:15:01.300371 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.300376 kubelet[3702]: E1216 12:15:01.300378 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.300506 kubelet[3702]: E1216 12:15:01.300471 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.300506 kubelet[3702]: W1216 12:15:01.300504 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.300560 kubelet[3702]: E1216 12:15:01.300512 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.300700 kubelet[3702]: E1216 12:15:01.300690 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.300700 kubelet[3702]: W1216 12:15:01.300697 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.300700 kubelet[3702]: E1216 12:15:01.300703 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.300798 kubelet[3702]: E1216 12:15:01.300787 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.300798 kubelet[3702]: W1216 12:15:01.300793 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.300798 kubelet[3702]: E1216 12:15:01.300798 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.314749 containerd[2156]: time="2025-12-16T12:15:01.314682965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f7b4b5bff-9r98t,Uid:69c0bcfe-1e5c-4954-a486-cfb50eef1149,Namespace:calico-system,Attempt:0,} returns sandbox id \"139efe624760698c00ca7fef0392d28a8b5e4c7bbc0fa0bc44d055c1817bdcc5\"" Dec 16 12:15:01.317243 containerd[2156]: time="2025-12-16T12:15:01.317060124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:15:01.382804 containerd[2156]: time="2025-12-16T12:15:01.382699766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rqrnv,Uid:0b01593c-a7b3-47d1-9a86-5201322c4ebd,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:01.400762 kubelet[3702]: E1216 12:15:01.400605 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.400762 kubelet[3702]: W1216 12:15:01.400630 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.400762 kubelet[3702]: E1216 12:15:01.400653 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.401092 kubelet[3702]: E1216 12:15:01.401021 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.401092 kubelet[3702]: W1216 12:15:01.401046 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.401092 kubelet[3702]: E1216 12:15:01.401057 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.401285 kubelet[3702]: E1216 12:15:01.401264 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.401285 kubelet[3702]: W1216 12:15:01.401281 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.401347 kubelet[3702]: E1216 12:15:01.401293 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.401527 kubelet[3702]: E1216 12:15:01.401513 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.401527 kubelet[3702]: W1216 12:15:01.401525 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.401670 kubelet[3702]: E1216 12:15:01.401535 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.401741 kubelet[3702]: E1216 12:15:01.401728 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.401741 kubelet[3702]: W1216 12:15:01.401738 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.401798 kubelet[3702]: E1216 12:15:01.401747 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.401981 kubelet[3702]: E1216 12:15:01.401967 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.401981 kubelet[3702]: W1216 12:15:01.401978 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.402038 kubelet[3702]: E1216 12:15:01.401993 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.402171 kubelet[3702]: E1216 12:15:01.402156 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.402171 kubelet[3702]: W1216 12:15:01.402167 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.402228 kubelet[3702]: E1216 12:15:01.402174 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.403887 kubelet[3702]: E1216 12:15:01.402607 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.403887 kubelet[3702]: W1216 12:15:01.402617 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.403887 kubelet[3702]: E1216 12:15:01.402626 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.403887 kubelet[3702]: E1216 12:15:01.402800 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.403887 kubelet[3702]: W1216 12:15:01.402810 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.403887 kubelet[3702]: E1216 12:15:01.402818 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.403887 kubelet[3702]: E1216 12:15:01.402963 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.403887 kubelet[3702]: W1216 12:15:01.402971 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.403887 kubelet[3702]: E1216 12:15:01.402978 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.403887 kubelet[3702]: E1216 12:15:01.403109 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.404058 kubelet[3702]: W1216 12:15:01.403116 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.404058 kubelet[3702]: E1216 12:15:01.403125 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.404058 kubelet[3702]: E1216 12:15:01.403544 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.404058 kubelet[3702]: W1216 12:15:01.403555 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.404058 kubelet[3702]: E1216 12:15:01.403565 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.404615 kubelet[3702]: E1216 12:15:01.404187 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.404615 kubelet[3702]: W1216 12:15:01.404198 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.404615 kubelet[3702]: E1216 12:15:01.404208 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.404615 kubelet[3702]: E1216 12:15:01.404561 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.404615 kubelet[3702]: W1216 12:15:01.404572 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.404615 kubelet[3702]: E1216 12:15:01.404582 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.404924 kubelet[3702]: E1216 12:15:01.404907 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.404924 kubelet[3702]: W1216 12:15:01.404921 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.405109 kubelet[3702]: E1216 12:15:01.404941 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.405245 kubelet[3702]: E1216 12:15:01.405229 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.405245 kubelet[3702]: W1216 12:15:01.405242 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.405309 kubelet[3702]: E1216 12:15:01.405252 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.405627 kubelet[3702]: E1216 12:15:01.405613 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.405627 kubelet[3702]: W1216 12:15:01.405624 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.405691 kubelet[3702]: E1216 12:15:01.405634 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.405958 kubelet[3702]: E1216 12:15:01.405943 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.405958 kubelet[3702]: W1216 12:15:01.405956 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.406015 kubelet[3702]: E1216 12:15:01.405966 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.407270 kubelet[3702]: E1216 12:15:01.407235 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.407270 kubelet[3702]: W1216 12:15:01.407259 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.407270 kubelet[3702]: E1216 12:15:01.407273 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.407697 kubelet[3702]: E1216 12:15:01.407430 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.407697 kubelet[3702]: W1216 12:15:01.407440 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.407697 kubelet[3702]: E1216 12:15:01.407450 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.408244 kubelet[3702]: E1216 12:15:01.407845 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.408244 kubelet[3702]: W1216 12:15:01.407857 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.408244 kubelet[3702]: E1216 12:15:01.407867 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.408629 kubelet[3702]: E1216 12:15:01.408515 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.408629 kubelet[3702]: W1216 12:15:01.408528 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.408629 kubelet[3702]: E1216 12:15:01.408538 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:01.409617 kubelet[3702]: E1216 12:15:01.409512 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.409617 kubelet[3702]: W1216 12:15:01.409525 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.409617 kubelet[3702]: E1216 12:15:01.409535 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.410133 kubelet[3702]: E1216 12:15:01.410112 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.410227 kubelet[3702]: W1216 12:15:01.410209 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.410285 kubelet[3702]: E1216 12:15:01.410274 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.410491 kubelet[3702]: E1216 12:15:01.410470 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.410791 kubelet[3702]: W1216 12:15:01.410549 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.410791 kubelet[3702]: E1216 12:15:01.410562 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.425999 kubelet[3702]: E1216 12:15:01.425978 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:01.426184 kubelet[3702]: W1216 12:15:01.426123 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:01.426184 kubelet[3702]: E1216 12:15:01.426146 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:01.431158 containerd[2156]: time="2025-12-16T12:15:01.430895096Z" level=info msg="connecting to shim 7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276" address="unix:///run/containerd/s/38ab9c5ccefe7a58693f9e4388450dedc2a7ba2356c3cc11a96ba45a8157e4c0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:01.452683 systemd[1]: Started cri-containerd-7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276.scope - libcontainer container 7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276. 
Dec 16 12:15:01.458000 audit: BPF prog-id=180 op=LOAD Dec 16 12:15:01.459000 audit: BPF prog-id=181 op=LOAD Dec 16 12:15:01.459000 audit[4253]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=4243 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393063623338343232396335323765613336353439306333656363 Dec 16 12:15:01.459000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:15:01.459000 audit[4253]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393063623338343232396335323765613336353439306333656363 Dec 16 12:15:01.459000 audit: BPF prog-id=182 op=LOAD Dec 16 12:15:01.459000 audit[4253]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=4243 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393063623338343232396335323765613336353439306333656363 Dec 16 12:15:01.459000 audit: BPF prog-id=183 op=LOAD Dec 16 12:15:01.459000 audit[4253]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=4243 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393063623338343232396335323765613336353439306333656363 Dec 16 12:15:01.459000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:15:01.459000 audit[4253]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393063623338343232396335323765613336353439306333656363 Dec 16 12:15:01.459000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:15:01.459000 audit[4253]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393063623338343232396335323765613336353439306333656363 Dec 16 12:15:01.459000 audit: BPF prog-id=184 op=LOAD Dec 16 12:15:01.459000 audit[4253]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=4243 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393063623338343232396335323765613336353439306333656363 Dec 16 12:15:01.476869 containerd[2156]: time="2025-12-16T12:15:01.476822078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rqrnv,Uid:0b01593c-a7b3-47d1-9a86-5201322c4ebd,Namespace:calico-system,Attempt:0,} returns sandbox id \"7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276\"" Dec 16 12:15:01.879000 audit[4281]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:01.879000 audit[4281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd9adf500 a2=0 a3=1 items=0 ppid=3860 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:01.883000 audit[4281]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:01.883000 audit[4281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd9adf500 a2=0 a3=1 items=0 ppid=3860 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:01.883000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:02.759799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82445966.mount: Deactivated successfully. 
Dec 16 12:15:03.480857 kubelet[3702]: E1216 12:15:03.480735 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:03.539706 containerd[2156]: time="2025-12-16T12:15:03.539373306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:03.543156 containerd[2156]: time="2025-12-16T12:15:03.542655372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 12:15:03.547045 containerd[2156]: time="2025-12-16T12:15:03.547008129Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:03.551916 containerd[2156]: time="2025-12-16T12:15:03.551635706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:03.551916 containerd[2156]: time="2025-12-16T12:15:03.551822677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.234738174s" Dec 16 12:15:03.551916 containerd[2156]: time="2025-12-16T12:15:03.551846247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:15:03.553302 containerd[2156]: time="2025-12-16T12:15:03.553274783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:15:03.568627 containerd[2156]: time="2025-12-16T12:15:03.568591361Z" level=info msg="CreateContainer within sandbox \"139efe624760698c00ca7fef0392d28a8b5e4c7bbc0fa0bc44d055c1817bdcc5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:15:03.590350 containerd[2156]: time="2025-12-16T12:15:03.588875223Z" level=info msg="Container 64cf12dc9456a8222ac5987e6ae192a85dcf691038856998105fa4a9e60fa99e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:03.589767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572673832.mount: Deactivated successfully. 
Dec 16 12:15:03.609382 containerd[2156]: time="2025-12-16T12:15:03.609339767Z" level=info msg="CreateContainer within sandbox \"139efe624760698c00ca7fef0392d28a8b5e4c7bbc0fa0bc44d055c1817bdcc5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"64cf12dc9456a8222ac5987e6ae192a85dcf691038856998105fa4a9e60fa99e\"" Dec 16 12:15:03.610858 containerd[2156]: time="2025-12-16T12:15:03.610822548Z" level=info msg="StartContainer for \"64cf12dc9456a8222ac5987e6ae192a85dcf691038856998105fa4a9e60fa99e\"" Dec 16 12:15:03.612252 containerd[2156]: time="2025-12-16T12:15:03.612224593Z" level=info msg="connecting to shim 64cf12dc9456a8222ac5987e6ae192a85dcf691038856998105fa4a9e60fa99e" address="unix:///run/containerd/s/1b7d72aaf00f2fc2852ea1858e7a62867859c993af5a81e29f36de4aaafd123e" protocol=ttrpc version=3 Dec 16 12:15:03.636665 systemd[1]: Started cri-containerd-64cf12dc9456a8222ac5987e6ae192a85dcf691038856998105fa4a9e60fa99e.scope - libcontainer container 64cf12dc9456a8222ac5987e6ae192a85dcf691038856998105fa4a9e60fa99e. Dec 16 12:15:03.646000 audit: BPF prog-id=185 op=LOAD Dec 16 12:15:03.646000 audit: BPF prog-id=186 op=LOAD Dec 16 12:15:03.646000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4125 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:03.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634636631326463393435366138323232616335393837653661653139 Dec 16 12:15:03.646000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:15:03.646000 audit[4292]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4125 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:03.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634636631326463393435366138323232616335393837653661653139 Dec 16 12:15:03.647000 audit: BPF prog-id=187 op=LOAD Dec 16 12:15:03.647000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4125 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:03.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634636631326463393435366138323232616335393837653661653139 Dec 16 12:15:03.647000 audit: BPF prog-id=188 op=LOAD Dec 16 12:15:03.647000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4125 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:03.647000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634636631326463393435366138323232616335393837653661653139 Dec 16 12:15:03.647000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:15:03.647000 audit[4292]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4125 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:03.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634636631326463393435366138323232616335393837653661653139 Dec 16 12:15:03.647000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:15:03.647000 audit[4292]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4125 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:03.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634636631326463393435366138323232616335393837653661653139 Dec 16 12:15:03.647000 audit: BPF prog-id=189 op=LOAD Dec 16 12:15:03.647000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4125 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:03.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634636631326463393435366138323232616335393837653661653139 Dec 16 12:15:03.675167 containerd[2156]: time="2025-12-16T12:15:03.675135945Z" level=info msg="StartContainer for \"64cf12dc9456a8222ac5987e6ae192a85dcf691038856998105fa4a9e60fa99e\" returns successfully" Dec 16 12:15:04.616140 kubelet[3702]: E1216 12:15:04.616094 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.616140 kubelet[3702]: W1216 12:15:04.616123 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.616140 kubelet[3702]: E1216 12:15:04.616145 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:04.616696 kubelet[3702]: E1216 12:15:04.616291 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.616696 kubelet[3702]: W1216 12:15:04.616297 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.616696 kubelet[3702]: E1216 12:15:04.616332 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.616696 kubelet[3702]: E1216 12:15:04.616436 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.616696 kubelet[3702]: W1216 12:15:04.616442 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.616696 kubelet[3702]: E1216 12:15:04.616449 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.616696 kubelet[3702]: E1216 12:15:04.616582 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.616696 kubelet[3702]: W1216 12:15:04.616588 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.616696 kubelet[3702]: E1216 12:15:04.616595 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.616857 kubelet[3702]: E1216 12:15:04.616731 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.616857 kubelet[3702]: W1216 12:15:04.616737 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.616857 kubelet[3702]: E1216 12:15:04.616744 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.616857 kubelet[3702]: E1216 12:15:04.616833 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.616857 kubelet[3702]: W1216 12:15:04.616837 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.616857 kubelet[3702]: E1216 12:15:04.616843 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:04.617005 kubelet[3702]: E1216 12:15:04.616922 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617005 kubelet[3702]: W1216 12:15:04.616927 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617005 kubelet[3702]: E1216 12:15:04.616931 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.617064 kubelet[3702]: E1216 12:15:04.617014 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617064 kubelet[3702]: W1216 12:15:04.617018 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617064 kubelet[3702]: E1216 12:15:04.617023 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.617133 kubelet[3702]: E1216 12:15:04.617119 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617133 kubelet[3702]: W1216 12:15:04.617130 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617169 kubelet[3702]: E1216 12:15:04.617137 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.617240 kubelet[3702]: E1216 12:15:04.617223 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617240 kubelet[3702]: W1216 12:15:04.617234 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617240 kubelet[3702]: E1216 12:15:04.617239 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.617330 kubelet[3702]: E1216 12:15:04.617318 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617330 kubelet[3702]: W1216 12:15:04.617326 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617372 kubelet[3702]: E1216 12:15:04.617332 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:04.617434 kubelet[3702]: E1216 12:15:04.617414 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617434 kubelet[3702]: W1216 12:15:04.617421 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617434 kubelet[3702]: E1216 12:15:04.617426 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.617549 kubelet[3702]: E1216 12:15:04.617542 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617549 kubelet[3702]: W1216 12:15:04.617547 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617608 kubelet[3702]: E1216 12:15:04.617553 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.617648 kubelet[3702]: E1216 12:15:04.617636 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617648 kubelet[3702]: W1216 12:15:04.617644 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617679 kubelet[3702]: E1216 12:15:04.617650 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.617744 kubelet[3702]: E1216 12:15:04.617730 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.617744 kubelet[3702]: W1216 12:15:04.617739 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.617744 kubelet[3702]: E1216 12:15:04.617744 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.623842 kubelet[3702]: E1216 12:15:04.623807 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.623842 kubelet[3702]: W1216 12:15:04.623825 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.623842 kubelet[3702]: E1216 12:15:04.623836 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:04.624013 kubelet[3702]: E1216 12:15:04.623991 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.624013 kubelet[3702]: W1216 12:15:04.623998 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.624013 kubelet[3702]: E1216 12:15:04.624005 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.624495 kubelet[3702]: E1216 12:15:04.624162 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.624495 kubelet[3702]: W1216 12:15:04.624172 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.624495 kubelet[3702]: E1216 12:15:04.624179 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.624950 kubelet[3702]: E1216 12:15:04.624848 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.625189 kubelet[3702]: W1216 12:15:04.625128 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.625414 kubelet[3702]: E1216 12:15:04.625301 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.625834 kubelet[3702]: E1216 12:15:04.625705 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.625941 kubelet[3702]: W1216 12:15:04.625910 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.626223 kubelet[3702]: E1216 12:15:04.626072 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.626433 kubelet[3702]: E1216 12:15:04.626420 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.626658 kubelet[3702]: W1216 12:15:04.626539 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.627083 kubelet[3702]: E1216 12:15:04.626725 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:04.627216 kubelet[3702]: E1216 12:15:04.627204 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.627633 kubelet[3702]: W1216 12:15:04.627264 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.627718 kubelet[3702]: E1216 12:15:04.627704 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.627978 kubelet[3702]: E1216 12:15:04.627966 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.628121 kubelet[3702]: W1216 12:15:04.628026 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.628121 kubelet[3702]: E1216 12:15:04.628039 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.628651 kubelet[3702]: E1216 12:15:04.628639 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.628700 kubelet[3702]: W1216 12:15:04.628690 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.628824 kubelet[3702]: E1216 12:15:04.628735 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.628918 kubelet[3702]: E1216 12:15:04.628907 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.629190 kubelet[3702]: W1216 12:15:04.629172 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.629502 kubelet[3702]: E1216 12:15:04.629246 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.629966 kubelet[3702]: E1216 12:15:04.629620 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.630052 kubelet[3702]: W1216 12:15:04.630039 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.630105 kubelet[3702]: E1216 12:15:04.630095 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:04.630674 kubelet[3702]: E1216 12:15:04.630359 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.630674 kubelet[3702]: W1216 12:15:04.630569 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.630674 kubelet[3702]: E1216 12:15:04.630586 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.631004 kubelet[3702]: E1216 12:15:04.630977 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.631159 kubelet[3702]: W1216 12:15:04.631144 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.631335 kubelet[3702]: E1216 12:15:04.631225 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.631543 kubelet[3702]: E1216 12:15:04.631531 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.631596 kubelet[3702]: W1216 12:15:04.631587 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.631749 kubelet[3702]: E1216 12:15:04.631631 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.631922 kubelet[3702]: E1216 12:15:04.631912 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.632090 kubelet[3702]: W1216 12:15:04.631965 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.632090 kubelet[3702]: E1216 12:15:04.631978 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.632213 kubelet[3702]: E1216 12:15:04.632203 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.632266 kubelet[3702]: W1216 12:15:04.632257 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.632349 kubelet[3702]: E1216 12:15:04.632329 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:15:04.633080 kubelet[3702]: E1216 12:15:04.633049 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.633080 kubelet[3702]: W1216 12:15:04.633067 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.633080 kubelet[3702]: E1216 12:15:04.633079 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.633338 kubelet[3702]: E1216 12:15:04.633321 3702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:15:04.633338 kubelet[3702]: W1216 12:15:04.633334 3702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:15:04.633382 kubelet[3702]: E1216 12:15:04.633344 3702 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:15:04.885943 containerd[2156]: time="2025-12-16T12:15:04.885803474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:04.949548 containerd[2156]: time="2025-12-16T12:15:04.949438051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=741" Dec 16 12:15:04.952954 containerd[2156]: time="2025-12-16T12:15:04.952898878Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:04.956447 containerd[2156]: time="2025-12-16T12:15:04.956406007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:04.956786 containerd[2156]: time="2025-12-16T12:15:04.956684571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.403379257s" Dec 16 12:15:04.956786 containerd[2156]: time="2025-12-16T12:15:04.956712893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:15:04.964411 containerd[2156]: time="2025-12-16T12:15:04.964352093Z" level=info msg="CreateContainer within sandbox \"7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:15:04.987815 containerd[2156]: time="2025-12-16T12:15:04.987094017Z" level=info msg="Container eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a: CDI 
devices from CRI Config.CDIDevices: []" Dec 16 12:15:04.990660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409833145.mount: Deactivated successfully. Dec 16 12:15:05.005280 containerd[2156]: time="2025-12-16T12:15:05.005240481Z" level=info msg="CreateContainer within sandbox \"7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a\"" Dec 16 12:15:05.005948 containerd[2156]: time="2025-12-16T12:15:05.005923341Z" level=info msg="StartContainer for \"eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a\"" Dec 16 12:15:05.007206 containerd[2156]: time="2025-12-16T12:15:05.007180491Z" level=info msg="connecting to shim eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a" address="unix:///run/containerd/s/38ab9c5ccefe7a58693f9e4388450dedc2a7ba2356c3cc11a96ba45a8157e4c0" protocol=ttrpc version=3 Dec 16 12:15:05.024741 systemd[1]: Started cri-containerd-eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a.scope - libcontainer container eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a. Dec 16 12:15:05.062000 audit: BPF prog-id=190 op=LOAD Dec 16 12:15:05.067007 kernel: kauditd_printk_skb: 74 callbacks suppressed Dec 16 12:15:05.067077 kernel: audit: type=1334 audit(1765887305.062:584): prog-id=190 op=LOAD Dec 16 12:15:05.062000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4243 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:05.102508 kernel: audit: type=1300 audit(1765887305.062:584): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4243 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:05.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561646364396530393562303930396362383363383462636161313438 Dec 16 12:15:05.121971 kernel: audit: type=1327 audit(1765887305.062:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561646364396530393562303930396362383363383462636161313438 Dec 16 12:15:05.062000 audit: BPF prog-id=191 op=LOAD Dec 16 12:15:05.127933 kernel: audit: type=1334 audit(1765887305.062:585): prog-id=191 op=LOAD Dec 16 12:15:05.062000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4243 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:05.147480 kernel: audit: type=1300 audit(1765887305.062:585): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4243 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:05.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561646364396530393562303930396362383363383462636161313438 Dec 16 12:15:05.167228 kernel: audit: type=1327 audit(1765887305.062:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561646364396530393562303930396362383363383462636161313438 Dec 16 12:15:05.065000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:15:05.172615 kernel: audit: type=1334 audit(1765887305.065:586): prog-id=191 op=UNLOAD Dec 16 12:15:05.065000 audit[4368]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:05.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561646364396530393562303930396362383363383462636161313438 Dec 16 12:15:05.208572 kernel: audit: type=1300 audit(1765887305.065:586): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:05.208668 kernel: audit: type=1327 audit(1765887305.065:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561646364396530393562303930396362383363383462636161313438 Dec 16 12:15:05.065000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:15:05.213741 kernel: audit: type=1334 audit(1765887305.065:587): prog-id=190 op=UNLOAD Dec 16 12:15:05.065000 audit[4368]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:05.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561646364396530393562303930396362383363383462636161313438 Dec 16 12:15:05.065000 audit: BPF prog-id=192 op=LOAD Dec 16 12:15:05.065000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4243 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:05.065000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561646364396530393562303930396362383363383462636161313438 Dec 16 12:15:05.222665 containerd[2156]: time="2025-12-16T12:15:05.222581879Z" level=info msg="StartContainer for \"eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a\" returns successfully" Dec 16 12:15:05.231164 systemd[1]: cri-containerd-eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a.scope: Deactivated successfully. Dec 16 12:15:05.234000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:15:05.236843 containerd[2156]: time="2025-12-16T12:15:05.236756682Z" level=info msg="received container exit event container_id:\"eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a\" id:\"eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a\" pid:4381 exited_at:{seconds:1765887305 nanos:236004968}" Dec 16 12:15:05.255544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eadcd9e095b0909cb83c84bcaa14892b1604daac4480d48ffaedd368cc96856a-rootfs.mount: Deactivated successfully. Dec 16 12:15:05.482148 kubelet[3702]: E1216 12:15:05.481334 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:05.547507 kubelet[3702]: I1216 12:15:05.547322 3702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:15:05.562603 kubelet[3702]: I1216 12:15:05.562543 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f7b4b5bff-9r98t" podStartSLOduration=3.326375692 podStartE2EDuration="5.562527768s" podCreationTimestamp="2025-12-16 12:15:00 +0000 UTC" firstStartedPulling="2025-12-16 12:15:01.316491699 +0000 UTC m=+22.923108954" lastFinishedPulling="2025-12-16 12:15:03.552643783 +0000 UTC m=+25.159261030" observedRunningTime="2025-12-16 12:15:04.597718277 +0000 UTC m=+26.204335524" watchObservedRunningTime="2025-12-16 12:15:05.562527768 +0000 UTC m=+27.169145015" Dec 16 12:15:06.553302 containerd[2156]: time="2025-12-16T12:15:06.553251426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:15:07.480638 kubelet[3702]: E1216 12:15:07.480542 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:09.480696 kubelet[3702]: E1216 12:15:09.480640 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:09.754792 containerd[2156]: time="2025-12-16T12:15:09.754181518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:09.758502 containerd[2156]: time="2025-12-16T12:15:09.758361554Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:15:09.762790 containerd[2156]: time="2025-12-16T12:15:09.762637807Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:09.766930 containerd[2156]: time="2025-12-16T12:15:09.766884041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:09.767369 containerd[2156]: time="2025-12-16T12:15:09.767341940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.213664239s" Dec 16 12:15:09.767453 containerd[2156]: time="2025-12-16T12:15:09.767441462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:15:09.776111 containerd[2156]: time="2025-12-16T12:15:09.776080592Z" level=info msg="CreateContainer within sandbox \"7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:15:09.798642 containerd[2156]: time="2025-12-16T12:15:09.798604309Z" level=info msg="Container 59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:09.837930 containerd[2156]: time="2025-12-16T12:15:09.837887166Z" level=info msg="CreateContainer within sandbox \"7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060\"" Dec 16 12:15:09.839974 containerd[2156]: time="2025-12-16T12:15:09.838824959Z" level=info msg="StartContainer for \"59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060\"" Dec 16 12:15:09.839974 containerd[2156]: time="2025-12-16T12:15:09.839861209Z" level=info msg="connecting to shim 59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060" address="unix:///run/containerd/s/38ab9c5ccefe7a58693f9e4388450dedc2a7ba2356c3cc11a96ba45a8157e4c0" protocol=ttrpc version=3 Dec 16 12:15:09.859646 systemd[1]: Started cri-containerd-59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060.scope - libcontainer container 59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060. 
Dec 16 12:15:09.903000 audit: BPF prog-id=193 op=LOAD Dec 16 12:15:09.903000 audit[4425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4243 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:09.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663034363235313561626335383133393932326462346366333538 Dec 16 12:15:09.904000 audit: BPF prog-id=194 op=LOAD Dec 16 12:15:09.904000 audit[4425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4243 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:09.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663034363235313561626335383133393932326462346366333538 Dec 16 12:15:09.904000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:15:09.904000 audit[4425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:09.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663034363235313561626335383133393932326462346366333538 Dec 16 12:15:09.904000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:15:09.904000 audit[4425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:09.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663034363235313561626335383133393932326462346366333538 Dec 16 12:15:09.904000 audit: BPF prog-id=195 op=LOAD Dec 16 12:15:09.904000 audit[4425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4243 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:09.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663034363235313561626335383133393932326462346366333538 Dec 16 12:15:09.928252 containerd[2156]: time="2025-12-16T12:15:09.928200224Z" level=info msg="StartContainer for 
\"59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060\" returns successfully" Dec 16 12:15:11.480974 kubelet[3702]: E1216 12:15:11.480923 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:11.524963 containerd[2156]: time="2025-12-16T12:15:11.524919051Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:15:11.526607 systemd[1]: cri-containerd-59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060.scope: Deactivated successfully. Dec 16 12:15:11.527154 systemd[1]: cri-containerd-59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060.scope: Consumed 328ms CPU time, 193.4M memory peak, 165.9M written to disk. Dec 16 12:15:11.528458 containerd[2156]: time="2025-12-16T12:15:11.528412838Z" level=info msg="received container exit event container_id:\"59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060\" id:\"59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060\" pid:4437 exited_at:{seconds:1765887311 nanos:527335920}" Dec 16 12:15:11.538551 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 16 12:15:11.538675 kernel: audit: type=1334 audit(1765887311.531:595): prog-id=195 op=UNLOAD Dec 16 12:15:11.531000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:15:11.553188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59f0462515abc58139922db4cf3584520ea61cca5ed4145a6b76256626505060-rootfs.mount: Deactivated successfully. Dec 16 12:15:11.599963 kubelet[3702]: I1216 12:15:11.599925 3702 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:15:11.907544 systemd[1]: Created slice kubepods-besteffort-pod7de63821_b623_478a_a40e_6502071e35ea.slice - libcontainer container kubepods-besteffort-pod7de63821_b623_478a_a40e_6502071e35ea.slice. Dec 16 12:15:11.914630 systemd[1]: Created slice kubepods-burstable-podc05ee2d8_b800_46f3_8cb2_2945d50dda66.slice - libcontainer container kubepods-burstable-podc05ee2d8_b800_46f3_8cb2_2945d50dda66.slice. Dec 16 12:15:11.930788 systemd[1]: Created slice kubepods-burstable-pod8828f39e_aa8e_4fd0_b7c6_7777cb3950e2.slice - libcontainer container kubepods-burstable-pod8828f39e_aa8e_4fd0_b7c6_7777cb3950e2.slice. Dec 16 12:15:11.935681 systemd[1]: Created slice kubepods-besteffort-podcf43de7f_9014_448e_837a_134a14319b6e.slice - libcontainer container kubepods-besteffort-podcf43de7f_9014_448e_837a_134a14319b6e.slice. Dec 16 12:15:11.947004 systemd[1]: Created slice kubepods-besteffort-podc365101f_0c2a_4266_abb7_2136287ff3ab.slice - libcontainer container kubepods-besteffort-podc365101f_0c2a_4266_abb7_2136287ff3ab.slice. Dec 16 12:15:11.953362 systemd[1]: Created slice kubepods-besteffort-pod94cf7c82_e4b2_4a6c_9fc8_83e906d8394a.slice - libcontainer container kubepods-besteffort-pod94cf7c82_e4b2_4a6c_9fc8_83e906d8394a.slice. Dec 16 12:15:11.958431 systemd[1]: Created slice kubepods-besteffort-pode0f8af17_5d0e_41d3_8143_e682bcff58c4.slice - libcontainer container kubepods-besteffort-pode0f8af17_5d0e_41d3_8143_e682bcff58c4.slice. 
Dec 16 12:15:12.064629 kubelet[3702]: I1216 12:15:12.064542 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgdkk\" (UniqueName: \"kubernetes.io/projected/c365101f-0c2a-4266-abb7-2136287ff3ab-kube-api-access-cgdkk\") pod \"goldmane-666569f655-7x78m\" (UID: \"c365101f-0c2a-4266-abb7-2136287ff3ab\") " pod="calico-system/goldmane-666569f655-7x78m" Dec 16 12:15:12.064629 kubelet[3702]: I1216 12:15:12.064589 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf43de7f-9014-448e-837a-134a14319b6e-whisker-ca-bundle\") pod \"whisker-6c89b854b6-fkzll\" (UID: \"cf43de7f-9014-448e-837a-134a14319b6e\") " pod="calico-system/whisker-6c89b854b6-fkzll" Dec 16 12:15:12.064818 kubelet[3702]: I1216 12:15:12.064633 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltkg\" (UniqueName: \"kubernetes.io/projected/94cf7c82-e4b2-4a6c-9fc8-83e906d8394a-kube-api-access-7ltkg\") pod \"calico-apiserver-74755c97b7-lhmmb\" (UID: \"94cf7c82-e4b2-4a6c-9fc8-83e906d8394a\") " pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" Dec 16 12:15:12.064818 kubelet[3702]: I1216 12:15:12.064685 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c365101f-0c2a-4266-abb7-2136287ff3ab-goldmane-key-pair\") pod \"goldmane-666569f655-7x78m\" (UID: \"c365101f-0c2a-4266-abb7-2136287ff3ab\") " pod="calico-system/goldmane-666569f655-7x78m" Dec 16 12:15:12.064818 kubelet[3702]: I1216 12:15:12.064695 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdf8\" (UniqueName: \"kubernetes.io/projected/cf43de7f-9014-448e-837a-134a14319b6e-kube-api-access-vmdf8\") pod \"whisker-6c89b854b6-fkzll\" (UID: \"cf43de7f-9014-448e-837a-134a14319b6e\") " pod="calico-system/whisker-6c89b854b6-fkzll" Dec 16 12:15:12.064818 kubelet[3702]: I1216 12:15:12.064708 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8828f39e-aa8e-4fd0-b7c6-7777cb3950e2-config-volume\") pod \"coredns-674b8bbfcf-gfrxj\" (UID: \"8828f39e-aa8e-4fd0-b7c6-7777cb3950e2\") " pod="kube-system/coredns-674b8bbfcf-gfrxj" Dec 16 12:15:12.064818 kubelet[3702]: I1216 12:15:12.064730 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5cv8\" (UniqueName: \"kubernetes.io/projected/c05ee2d8-b800-46f3-8cb2-2945d50dda66-kube-api-access-k5cv8\") pod \"coredns-674b8bbfcf-dcxhs\" (UID: \"c05ee2d8-b800-46f3-8cb2-2945d50dda66\") " pod="kube-system/coredns-674b8bbfcf-dcxhs" Dec 16 12:15:12.064956 kubelet[3702]: I1216 12:15:12.064740 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf43de7f-9014-448e-837a-134a14319b6e-whisker-backend-key-pair\") pod \"whisker-6c89b854b6-fkzll\" (UID: \"cf43de7f-9014-448e-837a-134a14319b6e\") " pod="calico-system/whisker-6c89b854b6-fkzll" Dec 16 12:15:12.064956 kubelet[3702]: I1216 12:15:12.064754 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c365101f-0c2a-4266-abb7-2136287ff3ab-goldmane-ca-bundle\") pod \"goldmane-666569f655-7x78m\" (UID: \"c365101f-0c2a-4266-abb7-2136287ff3ab\") " pod="calico-system/goldmane-666569f655-7x78m" Dec 16 12:15:12.064956 kubelet[3702]: I1216 12:15:12.064770 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7de63821-b623-478a-a40e-6502071e35ea-tigera-ca-bundle\") pod \"calico-kube-controllers-547b944668-qhgzg\" (UID: \"7de63821-b623-478a-a40e-6502071e35ea\") " pod="calico-system/calico-kube-controllers-547b944668-qhgzg" Dec 16 12:15:12.064956 kubelet[3702]: I1216 12:15:12.064781 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblkr\" (UniqueName: \"kubernetes.io/projected/7de63821-b623-478a-a40e-6502071e35ea-kube-api-access-fblkr\") pod \"calico-kube-controllers-547b944668-qhgzg\" (UID: \"7de63821-b623-478a-a40e-6502071e35ea\") " pod="calico-system/calico-kube-controllers-547b944668-qhgzg" Dec 16 12:15:12.064956 kubelet[3702]: I1216 12:15:12.064793 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kd5l\" (UniqueName: \"kubernetes.io/projected/8828f39e-aa8e-4fd0-b7c6-7777cb3950e2-kube-api-access-5kd5l\") pod \"coredns-674b8bbfcf-gfrxj\" (UID: \"8828f39e-aa8e-4fd0-b7c6-7777cb3950e2\") " pod="kube-system/coredns-674b8bbfcf-gfrxj" Dec 16 12:15:12.065036 kubelet[3702]: I1216 12:15:12.064811 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c05ee2d8-b800-46f3-8cb2-2945d50dda66-config-volume\") pod \"coredns-674b8bbfcf-dcxhs\" (UID: \"c05ee2d8-b800-46f3-8cb2-2945d50dda66\") " pod="kube-system/coredns-674b8bbfcf-dcxhs" Dec 16 12:15:12.065036 kubelet[3702]: I1216 12:15:12.064833 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl6j\" (UniqueName: \"kubernetes.io/projected/e0f8af17-5d0e-41d3-8143-e682bcff58c4-kube-api-access-8jl6j\") pod \"calico-apiserver-74755c97b7-c55sp\" (UID: \"e0f8af17-5d0e-41d3-8143-e682bcff58c4\") " pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" Dec 16 12:15:12.065036 kubelet[3702]: I1216 12:15:12.064847 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c365101f-0c2a-4266-abb7-2136287ff3ab-config\") pod \"goldmane-666569f655-7x78m\" (UID: \"c365101f-0c2a-4266-abb7-2136287ff3ab\") " pod="calico-system/goldmane-666569f655-7x78m" Dec 16 12:15:12.065036 kubelet[3702]: I1216 12:15:12.064858 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94cf7c82-e4b2-4a6c-9fc8-83e906d8394a-calico-apiserver-certs\") pod \"calico-apiserver-74755c97b7-lhmmb\" (UID: \"94cf7c82-e4b2-4a6c-9fc8-83e906d8394a\") " pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" Dec 16 12:15:12.065036 kubelet[3702]: I1216 12:15:12.064870 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0f8af17-5d0e-41d3-8143-e682bcff58c4-calico-apiserver-certs\") pod \"calico-apiserver-74755c97b7-c55sp\" (UID: \"e0f8af17-5d0e-41d3-8143-e682bcff58c4\") " 
pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" Dec 16 12:15:12.221363 containerd[2156]: time="2025-12-16T12:15:12.221257426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547b944668-qhgzg,Uid:7de63821-b623-478a-a40e-6502071e35ea,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:12.225526 containerd[2156]: time="2025-12-16T12:15:12.225495843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcxhs,Uid:c05ee2d8-b800-46f3-8cb2-2945d50dda66,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:12.247582 containerd[2156]: time="2025-12-16T12:15:12.247533434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c89b854b6-fkzll,Uid:cf43de7f-9014-448e-837a-134a14319b6e,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:12.248035 containerd[2156]: time="2025-12-16T12:15:12.247945057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfrxj,Uid:8828f39e-aa8e-4fd0-b7c6-7777cb3950e2,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:12.252600 containerd[2156]: time="2025-12-16T12:15:12.252570871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7x78m,Uid:c365101f-0c2a-4266-abb7-2136287ff3ab,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:12.256689 containerd[2156]: time="2025-12-16T12:15:12.256650394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74755c97b7-lhmmb,Uid:94cf7c82-e4b2-4a6c-9fc8-83e906d8394a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:15:12.263133 containerd[2156]: time="2025-12-16T12:15:12.263097380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74755c97b7-c55sp,Uid:e0f8af17-5d0e-41d3-8143-e682bcff58c4,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:15:12.287647 containerd[2156]: time="2025-12-16T12:15:12.287600557Z" level=error msg="Failed to destroy network for sandbox \"181d332ab39b70967f1cb2050e0ae108bfa0aab03e71a347104d83ec3a867f9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.305949 containerd[2156]: time="2025-12-16T12:15:12.305879616Z" level=error msg="Failed to destroy network for sandbox \"14fe915c00c74b54d82ea4bac11b09a996f107db3151a446e29e6cccda8de94b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.325449 containerd[2156]: time="2025-12-16T12:15:12.325226761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547b944668-qhgzg,Uid:7de63821-b623-478a-a40e-6502071e35ea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"181d332ab39b70967f1cb2050e0ae108bfa0aab03e71a347104d83ec3a867f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.325691 kubelet[3702]: E1216 12:15:12.325615 3702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181d332ab39b70967f1cb2050e0ae108bfa0aab03e71a347104d83ec3a867f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Dec 16 12:15:12.325841 kubelet[3702]: E1216 12:15:12.325701 3702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181d332ab39b70967f1cb2050e0ae108bfa0aab03e71a347104d83ec3a867f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" Dec 16 12:15:12.325841 kubelet[3702]: E1216 12:15:12.325833 3702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181d332ab39b70967f1cb2050e0ae108bfa0aab03e71a347104d83ec3a867f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" Dec 16 12:15:12.326264 kubelet[3702]: E1216 12:15:12.325886 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-547b944668-qhgzg_calico-system(7de63821-b623-478a-a40e-6502071e35ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-547b944668-qhgzg_calico-system(7de63821-b623-478a-a40e-6502071e35ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"181d332ab39b70967f1cb2050e0ae108bfa0aab03e71a347104d83ec3a867f9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:15:12.330590 containerd[2156]: time="2025-12-16T12:15:12.330543048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcxhs,Uid:c05ee2d8-b800-46f3-8cb2-2945d50dda66,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14fe915c00c74b54d82ea4bac11b09a996f107db3151a446e29e6cccda8de94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.330982 kubelet[3702]: E1216 12:15:12.330728 3702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14fe915c00c74b54d82ea4bac11b09a996f107db3151a446e29e6cccda8de94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.330982 kubelet[3702]: E1216 12:15:12.330774 3702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14fe915c00c74b54d82ea4bac11b09a996f107db3151a446e29e6cccda8de94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dcxhs" Dec 16 12:15:12.330982 kubelet[3702]: E1216 12:15:12.330788 3702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14fe915c00c74b54d82ea4bac11b09a996f107db3151a446e29e6cccda8de94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dcxhs" Dec 16 12:15:12.331066 kubelet[3702]: E1216 12:15:12.330823 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dcxhs_kube-system(c05ee2d8-b800-46f3-8cb2-2945d50dda66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dcxhs_kube-system(c05ee2d8-b800-46f3-8cb2-2945d50dda66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14fe915c00c74b54d82ea4bac11b09a996f107db3151a446e29e6cccda8de94b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dcxhs" podUID="c05ee2d8-b800-46f3-8cb2-2945d50dda66" Dec 16 12:15:12.364413 containerd[2156]: time="2025-12-16T12:15:12.364256306Z" level=error msg="Failed to destroy network for sandbox \"1c526eb5652126ed6c07f430ea64e320fd80573b5aadc1f8c942c23ce8b13702\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.374425 containerd[2156]: time="2025-12-16T12:15:12.374358286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c89b854b6-fkzll,Uid:cf43de7f-9014-448e-837a-134a14319b6e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c526eb5652126ed6c07f430ea64e320fd80573b5aadc1f8c942c23ce8b13702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.375096 kubelet[3702]: E1216 12:15:12.374823 3702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c526eb5652126ed6c07f430ea64e320fd80573b5aadc1f8c942c23ce8b13702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.375096 kubelet[3702]: E1216 12:15:12.375031 3702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c526eb5652126ed6c07f430ea64e320fd80573b5aadc1f8c942c23ce8b13702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c89b854b6-fkzll" Dec 16 12:15:12.375096 kubelet[3702]: E1216 12:15:12.375053 3702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c526eb5652126ed6c07f430ea64e320fd80573b5aadc1f8c942c23ce8b13702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6c89b854b6-fkzll" Dec 16 12:15:12.375294 kubelet[3702]: E1216 12:15:12.375272 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c89b854b6-fkzll_calico-system(cf43de7f-9014-448e-837a-134a14319b6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c89b854b6-fkzll_calico-system(cf43de7f-9014-448e-837a-134a14319b6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c526eb5652126ed6c07f430ea64e320fd80573b5aadc1f8c942c23ce8b13702\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c89b854b6-fkzll" podUID="cf43de7f-9014-448e-837a-134a14319b6e" Dec 16 12:15:12.390062 containerd[2156]: time="2025-12-16T12:15:12.390021026Z" level=error msg="Failed to destroy network for sandbox \"142693491168a174ee9f4914b3a779ed6eedb5dd97fc452fc11adce73bf1c732\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.391558 containerd[2156]: time="2025-12-16T12:15:12.391523800Z" level=error msg="Failed to destroy network for sandbox \"f048689dda7128e2b60732864b44374d3965cadebb2737e0e6b6c9288bd76a9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.393252 containerd[2156]: time="2025-12-16T12:15:12.393215352Z" level=error msg="Failed to destroy network for sandbox \"7b029c1693b88e70cf0b0fc9e37d090201a23bb178d20a1278e936b4b1914aa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.400954 containerd[2156]: time="2025-12-16T12:15:12.400460511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfrxj,Uid:8828f39e-aa8e-4fd0-b7c6-7777cb3950e2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f048689dda7128e2b60732864b44374d3965cadebb2737e0e6b6c9288bd76a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.401067 kubelet[3702]: E1216 12:15:12.400839 3702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f048689dda7128e2b60732864b44374d3965cadebb2737e0e6b6c9288bd76a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.401067 kubelet[3702]: E1216 12:15:12.400893 3702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f048689dda7128e2b60732864b44374d3965cadebb2737e0e6b6c9288bd76a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gfrxj" Dec 16 12:15:12.401067 
kubelet[3702]: E1216 12:15:12.400909 3702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f048689dda7128e2b60732864b44374d3965cadebb2737e0e6b6c9288bd76a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gfrxj" Dec 16 12:15:12.401152 kubelet[3702]: E1216 12:15:12.400949 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gfrxj_kube-system(8828f39e-aa8e-4fd0-b7c6-7777cb3950e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gfrxj_kube-system(8828f39e-aa8e-4fd0-b7c6-7777cb3950e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f048689dda7128e2b60732864b44374d3965cadebb2737e0e6b6c9288bd76a9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gfrxj" podUID="8828f39e-aa8e-4fd0-b7c6-7777cb3950e2" Dec 16 12:15:12.405733 containerd[2156]: time="2025-12-16T12:15:12.405698615Z" level=error msg="Failed to destroy network for sandbox \"0ba13da7784bfca10ce6814399766bc0cfc4448346f5fe0553c2b19c8d9a1670\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.407358 containerd[2156]: time="2025-12-16T12:15:12.407324825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74755c97b7-lhmmb,Uid:94cf7c82-e4b2-4a6c-9fc8-83e906d8394a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"142693491168a174ee9f4914b3a779ed6eedb5dd97fc452fc11adce73bf1c732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.407794 kubelet[3702]: E1216 12:15:12.407759 3702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"142693491168a174ee9f4914b3a779ed6eedb5dd97fc452fc11adce73bf1c732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.407884 kubelet[3702]: E1216 12:15:12.407807 3702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"142693491168a174ee9f4914b3a779ed6eedb5dd97fc452fc11adce73bf1c732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" Dec 16 12:15:12.407884 kubelet[3702]: E1216 12:15:12.407827 3702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"142693491168a174ee9f4914b3a779ed6eedb5dd97fc452fc11adce73bf1c732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" Dec 16 12:15:12.407951 kubelet[3702]: E1216 12:15:12.407876 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74755c97b7-lhmmb_calico-apiserver(94cf7c82-e4b2-4a6c-9fc8-83e906d8394a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74755c97b7-lhmmb_calico-apiserver(94cf7c82-e4b2-4a6c-9fc8-83e906d8394a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"142693491168a174ee9f4914b3a779ed6eedb5dd97fc452fc11adce73bf1c732\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:15:12.410568 containerd[2156]: time="2025-12-16T12:15:12.410534809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7x78m,Uid:c365101f-0c2a-4266-abb7-2136287ff3ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b029c1693b88e70cf0b0fc9e37d090201a23bb178d20a1278e936b4b1914aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.411265 kubelet[3702]: E1216 12:15:12.411233 3702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b029c1693b88e70cf0b0fc9e37d090201a23bb178d20a1278e936b4b1914aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.411494 kubelet[3702]: E1216 12:15:12.411423 3702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b029c1693b88e70cf0b0fc9e37d090201a23bb178d20a1278e936b4b1914aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7x78m" Dec 16 12:15:12.411494 kubelet[3702]: E1216 12:15:12.411445 3702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b029c1693b88e70cf0b0fc9e37d090201a23bb178d20a1278e936b4b1914aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7x78m" Dec 16 12:15:12.411615 kubelet[3702]: E1216 12:15:12.411595 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7x78m_calico-system(c365101f-0c2a-4266-abb7-2136287ff3ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7x78m_calico-system(c365101f-0c2a-4266-abb7-2136287ff3ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b029c1693b88e70cf0b0fc9e37d090201a23bb178d20a1278e936b4b1914aa9\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:15:12.417494 containerd[2156]: time="2025-12-16T12:15:12.417412460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74755c97b7-c55sp,Uid:e0f8af17-5d0e-41d3-8143-e682bcff58c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ba13da7784bfca10ce6814399766bc0cfc4448346f5fe0553c2b19c8d9a1670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.417821 kubelet[3702]: E1216 12:15:12.417759 3702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ba13da7784bfca10ce6814399766bc0cfc4448346f5fe0553c2b19c8d9a1670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:12.417821 kubelet[3702]: E1216 12:15:12.417797 3702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ba13da7784bfca10ce6814399766bc0cfc4448346f5fe0553c2b19c8d9a1670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" Dec 16 12:15:12.417980 kubelet[3702]: E1216 12:15:12.417810 3702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ba13da7784bfca10ce6814399766bc0cfc4448346f5fe0553c2b19c8d9a1670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" Dec 16 12:15:12.418039 kubelet[3702]: E1216 12:15:12.417957 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74755c97b7-c55sp_calico-apiserver(e0f8af17-5d0e-41d3-8143-e682bcff58c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74755c97b7-c55sp_calico-apiserver(e0f8af17-5d0e-41d3-8143-e682bcff58c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ba13da7784bfca10ce6814399766bc0cfc4448346f5fe0553c2b19c8d9a1670\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:15:12.567916 containerd[2156]: time="2025-12-16T12:15:12.567412675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:15:13.485420 systemd[1]: Created slice kubepods-besteffort-pod77a7712d_2394_4a4f_8873_2dd27305d176.slice - libcontainer container kubepods-besteffort-pod77a7712d_2394_4a4f_8873_2dd27305d176.slice. 
Dec 16 12:15:13.487971 containerd[2156]: time="2025-12-16T12:15:13.487913338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lrs86,Uid:77a7712d-2394-4a4f-8873-2dd27305d176,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:13.531714 containerd[2156]: time="2025-12-16T12:15:13.531585122Z" level=error msg="Failed to destroy network for sandbox \"50bdcaa6c8061340b0acc243ee6cfe1fdc511b1986d73f6c4c06840ab77ee948\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:13.533494 systemd[1]: run-netns-cni\x2d8e9aee39\x2da172\x2dea48\x2d3ccd\x2d685e15d81b36.mount: Deactivated successfully. Dec 16 12:15:13.543078 containerd[2156]: time="2025-12-16T12:15:13.542970838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lrs86,Uid:77a7712d-2394-4a4f-8873-2dd27305d176,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50bdcaa6c8061340b0acc243ee6cfe1fdc511b1986d73f6c4c06840ab77ee948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:13.543308 kubelet[3702]: E1216 12:15:13.543193 3702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50bdcaa6c8061340b0acc243ee6cfe1fdc511b1986d73f6c4c06840ab77ee948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:15:13.543308 kubelet[3702]: E1216 12:15:13.543248 3702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50bdcaa6c8061340b0acc243ee6cfe1fdc511b1986d73f6c4c06840ab77ee948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lrs86" Dec 16 12:15:13.543308 kubelet[3702]: E1216 12:15:13.543264 3702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50bdcaa6c8061340b0acc243ee6cfe1fdc511b1986d73f6c4c06840ab77ee948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lrs86" Dec 16 12:15:13.543799 kubelet[3702]: E1216 12:15:13.543302 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50bdcaa6c8061340b0acc243ee6cfe1fdc511b1986d73f6c4c06840ab77ee948\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" 
Dec 16 12:15:18.313107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825692132.mount: Deactivated successfully. Dec 16 12:15:18.767906 containerd[2156]: time="2025-12-16T12:15:18.767846009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:18.771039 containerd[2156]: time="2025-12-16T12:15:18.770994842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:15:18.773916 containerd[2156]: time="2025-12-16T12:15:18.773887783Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:18.779005 containerd[2156]: time="2025-12-16T12:15:18.778975592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:18.779605 containerd[2156]: time="2025-12-16T12:15:18.779416356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.211182779s" Dec 16 12:15:18.779605 containerd[2156]: time="2025-12-16T12:15:18.779443862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:15:18.808140 containerd[2156]: time="2025-12-16T12:15:18.808094873Z" level=info msg="CreateContainer within sandbox \"7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:15:18.833720 containerd[2156]: time="2025-12-16T12:15:18.832800784Z" level=info msg="Container 6d99204f380caa3b434f640891f00dcf20072f9dee6bdd138348977e592d439a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:18.855738 containerd[2156]: time="2025-12-16T12:15:18.855695842Z" level=info msg="CreateContainer within sandbox \"7790cb384229c527ea365490c3ecc7c33441af8d48304dee175e550bbb0f2276\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6d99204f380caa3b434f640891f00dcf20072f9dee6bdd138348977e592d439a\"" Dec 16 12:15:18.856551 containerd[2156]: time="2025-12-16T12:15:18.856404076Z" level=info msg="StartContainer for \"6d99204f380caa3b434f640891f00dcf20072f9dee6bdd138348977e592d439a\"" Dec 16 12:15:18.857733 containerd[2156]: time="2025-12-16T12:15:18.857714880Z" level=info msg="connecting to shim 6d99204f380caa3b434f640891f00dcf20072f9dee6bdd138348977e592d439a" address="unix:///run/containerd/s/38ab9c5ccefe7a58693f9e4388450dedc2a7ba2356c3cc11a96ba45a8157e4c0" protocol=ttrpc version=3 Dec 16 12:15:18.877674 systemd[1]: Started cri-containerd-6d99204f380caa3b434f640891f00dcf20072f9dee6bdd138348977e592d439a.scope - libcontainer container 6d99204f380caa3b434f640891f00dcf20072f9dee6bdd138348977e592d439a. 
Dec 16 12:15:18.930000 audit: BPF prog-id=196 op=LOAD Dec 16 12:15:18.930000 audit[4696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4243 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:18.952421 kernel: audit: type=1334 audit(1765887318.930:596): prog-id=196 op=LOAD Dec 16 12:15:18.952529 kernel: audit: type=1300 audit(1765887318.930:596): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4243 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:18.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664393932303466333830636161336234333466363430383931663030 Dec 16 12:15:18.969951 kernel: audit: type=1327 audit(1765887318.930:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664393932303466333830636161336234333466363430383931663030 Dec 16 12:15:18.930000 audit: BPF prog-id=197 op=LOAD Dec 16 12:15:18.974717 kernel: audit: type=1334 audit(1765887318.930:597): prog-id=197 op=LOAD Dec 16 12:15:18.930000 audit[4696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4243 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:18.991268 kernel: audit: type=1300 audit(1765887318.930:597): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4243 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:18.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664393932303466333830636161336234333466363430383931663030 Dec 16 12:15:19.008395 kernel: audit: type=1327 audit(1765887318.930:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664393932303466333830636161336234333466363430383931663030 Dec 16 12:15:18.935000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:15:19.015089 kernel: audit: type=1334 audit(1765887318.935:598): prog-id=197 op=UNLOAD Dec 16 12:15:18.935000 audit[4696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:19.031397 kernel: audit: type=1300 audit(1765887318.935:598): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=4243 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:18.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664393932303466333830636161336234333466363430383931663030 Dec 16 12:15:19.048972 kernel: audit: type=1327 audit(1765887318.935:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664393932303466333830636161336234333466363430383931663030 Dec 16 12:15:18.935000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:15:19.053502 kernel: audit: type=1334 audit(1765887318.935:599): prog-id=196 op=UNLOAD Dec 16 12:15:18.935000 audit[4696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4243 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:18.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664393932303466333830636161336234333466363430383931663030 Dec 16 12:15:18.935000 audit: BPF prog-id=198 op=LOAD Dec 16 12:15:18.935000 audit[4696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4243 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:18.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664393932303466333830636161336234333466363430383931663030 Dec 16 12:15:19.064801 containerd[2156]: time="2025-12-16T12:15:19.064751921Z" level=info msg="StartContainer for \"6d99204f380caa3b434f640891f00dcf20072f9dee6bdd138348977e592d439a\" returns successfully" Dec 16 12:15:19.293414 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:15:19.293573 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 12:15:19.509195 kubelet[3702]: I1216 12:15:19.509160 3702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf43de7f-9014-448e-837a-134a14319b6e-whisker-backend-key-pair\") pod \"cf43de7f-9014-448e-837a-134a14319b6e\" (UID: \"cf43de7f-9014-448e-837a-134a14319b6e\") " Dec 16 12:15:19.509701 kubelet[3702]: I1216 12:15:19.509279 3702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf43de7f-9014-448e-837a-134a14319b6e-whisker-ca-bundle\") pod \"cf43de7f-9014-448e-837a-134a14319b6e\" (UID: \"cf43de7f-9014-448e-837a-134a14319b6e\") " Dec 16 12:15:19.509701 kubelet[3702]: I1216 12:15:19.509297 3702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmdf8\" (UniqueName: \"kubernetes.io/projected/cf43de7f-9014-448e-837a-134a14319b6e-kube-api-access-vmdf8\") pod \"cf43de7f-9014-448e-837a-134a14319b6e\" (UID: \"cf43de7f-9014-448e-837a-134a14319b6e\") " Dec 16 12:15:19.512408 kubelet[3702]: I1216 12:15:19.512353 3702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf43de7f-9014-448e-837a-134a14319b6e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cf43de7f-9014-448e-837a-134a14319b6e" (UID: "cf43de7f-9014-448e-837a-134a14319b6e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:15:19.516090 kubelet[3702]: I1216 12:15:19.516032 3702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf43de7f-9014-448e-837a-134a14319b6e-kube-api-access-vmdf8" (OuterVolumeSpecName: "kube-api-access-vmdf8") pod "cf43de7f-9014-448e-837a-134a14319b6e" (UID: "cf43de7f-9014-448e-837a-134a14319b6e"). InnerVolumeSpecName "kube-api-access-vmdf8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:15:19.517877 systemd[1]: var-lib-kubelet-pods-cf43de7f\x2d9014\x2d448e\x2d837a\x2d134a14319b6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvmdf8.mount: Deactivated successfully. Dec 16 12:15:19.524720 kubelet[3702]: I1216 12:15:19.522153 3702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf43de7f-9014-448e-837a-134a14319b6e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cf43de7f-9014-448e-837a-134a14319b6e" (UID: "cf43de7f-9014-448e-837a-134a14319b6e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:15:19.522844 systemd[1]: var-lib-kubelet-pods-cf43de7f\x2d9014\x2d448e\x2d837a\x2d134a14319b6e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:15:19.587752 systemd[1]: Removed slice kubepods-besteffort-podcf43de7f_9014_448e_837a_134a14319b6e.slice - libcontainer container kubepods-besteffort-podcf43de7f_9014_448e_837a_134a14319b6e.slice. 
Dec 16 12:15:19.611987 kubelet[3702]: I1216 12:15:19.611675 3702 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf43de7f-9014-448e-837a-134a14319b6e-whisker-ca-bundle\") on node \"ci-4547.0.0-a-4d45b340a5\" DevicePath \"\"" Dec 16 12:15:19.611987 kubelet[3702]: I1216 12:15:19.611715 3702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vmdf8\" (UniqueName: \"kubernetes.io/projected/cf43de7f-9014-448e-837a-134a14319b6e-kube-api-access-vmdf8\") on node \"ci-4547.0.0-a-4d45b340a5\" DevicePath \"\"" Dec 16 12:15:19.611987 kubelet[3702]: I1216 12:15:19.611722 3702 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf43de7f-9014-448e-837a-134a14319b6e-whisker-backend-key-pair\") on node \"ci-4547.0.0-a-4d45b340a5\" DevicePath \"\"" Dec 16 12:15:19.619251 kubelet[3702]: I1216 12:15:19.619145 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rqrnv" podStartSLOduration=1.317220266 podStartE2EDuration="18.619041127s" podCreationTimestamp="2025-12-16 12:15:01 +0000 UTC" firstStartedPulling="2025-12-16 12:15:01.478239332 +0000 UTC m=+23.084856587" lastFinishedPulling="2025-12-16 12:15:18.780060201 +0000 UTC m=+40.386677448" observedRunningTime="2025-12-16 12:15:19.603625344 +0000 UTC m=+41.210242599" watchObservedRunningTime="2025-12-16 12:15:19.619041127 +0000 UTC m=+41.225658374" Dec 16 12:15:19.677422 systemd[1]: Created slice kubepods-besteffort-pod738872a1_0466_4442_a71b_b4f7bae6b427.slice - libcontainer container kubepods-besteffort-pod738872a1_0466_4442_a71b_b4f7bae6b427.slice. Dec 16 12:15:19.712083 kubelet[3702]: I1216 12:15:19.712036 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/738872a1-0466-4442-a71b-b4f7bae6b427-whisker-ca-bundle\") pod \"whisker-98b9677f6-82g96\" (UID: \"738872a1-0466-4442-a71b-b4f7bae6b427\") " pod="calico-system/whisker-98b9677f6-82g96" Dec 16 12:15:19.712237 kubelet[3702]: I1216 12:15:19.712124 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/738872a1-0466-4442-a71b-b4f7bae6b427-whisker-backend-key-pair\") pod \"whisker-98b9677f6-82g96\" (UID: \"738872a1-0466-4442-a71b-b4f7bae6b427\") " pod="calico-system/whisker-98b9677f6-82g96" Dec 16 12:15:19.712237 kubelet[3702]: I1216 12:15:19.712136 3702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwb7n\" (UniqueName: \"kubernetes.io/projected/738872a1-0466-4442-a71b-b4f7bae6b427-kube-api-access-dwb7n\") pod \"whisker-98b9677f6-82g96\" (UID: \"738872a1-0466-4442-a71b-b4f7bae6b427\") " pod="calico-system/whisker-98b9677f6-82g96" Dec 16 12:15:19.983629 containerd[2156]: time="2025-12-16T12:15:19.983405488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98b9677f6-82g96,Uid:738872a1-0466-4442-a71b-b4f7bae6b427,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:20.099743 systemd-networkd[1754]: cali3b6472485fe: Link UP Dec 16 12:15:20.100676 systemd-networkd[1754]: cali3b6472485fe: Gained carrier Dec 16 12:15:20.128496 containerd[2156]: 2025-12-16 12:15:20.008 [INFO][4758] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:15:20.128496 containerd[2156]: 2025-12-16 12:15:20.039 
[INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0 whisker-98b9677f6- calico-system 738872a1-0466-4442-a71b-b4f7bae6b427 885 0 2025-12-16 12:15:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:98b9677f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-a-4d45b340a5 whisker-98b9677f6-82g96 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3b6472485fe [] [] }} ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Namespace="calico-system" Pod="whisker-98b9677f6-82g96" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-" Dec 16 12:15:20.128496 containerd[2156]: 2025-12-16 12:15:20.040 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Namespace="calico-system" Pod="whisker-98b9677f6-82g96" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" Dec 16 12:15:20.128496 containerd[2156]: 2025-12-16 12:15:20.058 [INFO][4769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" HandleID="k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Workload="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.058 [INFO][4769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" HandleID="k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Workload="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-4d45b340a5", "pod":"whisker-98b9677f6-82g96", "timestamp":"2025-12-16 12:15:20.058427102 +0000 UTC"}, Hostname:"ci-4547.0.0-a-4d45b340a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.058 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.058 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.058 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-4d45b340a5' Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.064 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.070 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.075 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.077 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128712 containerd[2156]: 2025-12-16 12:15:20.079 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128844 containerd[2156]: 2025-12-16 12:15:20.079 [INFO][4769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128844 containerd[2156]: 2025-12-16 12:15:20.080 [INFO][4769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed Dec 16 12:15:20.128844 containerd[2156]: 2025-12-16 12:15:20.084 [INFO][4769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128844 containerd[2156]: 2025-12-16 12:15:20.091 [INFO][4769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.129/26] block=192.168.65.128/26 handle="k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128844 containerd[2156]: 2025-12-16 12:15:20.091 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.129/26] handle="k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:20.128844 containerd[2156]: 2025-12-16 12:15:20.091 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:20.128844 containerd[2156]: 2025-12-16 12:15:20.092 [INFO][4769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.129/26] IPv6=[] ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" HandleID="k8s-pod-network.f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Workload="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" Dec 16 12:15:20.128936 containerd[2156]: 2025-12-16 12:15:20.094 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Namespace="calico-system" Pod="whisker-98b9677f6-82g96" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0", GenerateName:"whisker-98b9677f6-", Namespace:"calico-system", SelfLink:"", UID:"738872a1-0466-4442-a71b-b4f7bae6b427", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"98b9677f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"", Pod:"whisker-98b9677f6-82g96", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b6472485fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:20.128936 containerd[2156]: 2025-12-16 12:15:20.094 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.129/32] ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Namespace="calico-system" Pod="whisker-98b9677f6-82g96" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" Dec 16 12:15:20.128983 containerd[2156]: 2025-12-16 12:15:20.094 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b6472485fe ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Namespace="calico-system" Pod="whisker-98b9677f6-82g96" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" Dec 16 12:15:20.128983 containerd[2156]: 2025-12-16 12:15:20.099 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Namespace="calico-system" Pod="whisker-98b9677f6-82g96" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" Dec 16 12:15:20.129012 containerd[2156]: 2025-12-16 12:15:20.100 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Namespace="calico-system" 
Pod="whisker-98b9677f6-82g96" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0", GenerateName:"whisker-98b9677f6-", Namespace:"calico-system", SelfLink:"", UID:"738872a1-0466-4442-a71b-b4f7bae6b427", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"98b9677f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed", Pod:"whisker-98b9677f6-82g96", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b6472485fe", MAC:"6a:12:7a:0d:01:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:20.129078 containerd[2156]: 2025-12-16 12:15:20.125 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" Namespace="calico-system" Pod="whisker-98b9677f6-82g96" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-whisker--98b9677f6--82g96-eth0" Dec 16 12:15:20.169986 containerd[2156]: time="2025-12-16T12:15:20.169907086Z" level=info msg="connecting to shim f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed" address="unix:///run/containerd/s/44830dc1157c7decf32c46ee46106e5a36bcd7651610fc8f847f1dce9f4ca88e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:20.191686 systemd[1]: Started cri-containerd-f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed.scope - libcontainer container f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed. 
Dec 16 12:15:20.200000 audit: BPF prog-id=199 op=LOAD Dec 16 12:15:20.200000 audit: BPF prog-id=200 op=LOAD Dec 16 12:15:20.200000 audit[4803]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4792 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653230323037366265383562633033376263623130356466363366 Dec 16 12:15:20.201000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:15:20.201000 audit[4803]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4792 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653230323037366265383562633033376263623130356466363366 Dec 16 12:15:20.201000 audit: BPF prog-id=201 op=LOAD Dec 16 12:15:20.201000 audit[4803]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4792 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653230323037366265383562633033376263623130356466363366 Dec 16 12:15:20.201000 audit: BPF prog-id=202 op=LOAD Dec 16 12:15:20.201000 audit[4803]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4792 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653230323037366265383562633033376263623130356466363366 Dec 16 12:15:20.201000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:15:20.201000 audit[4803]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4792 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653230323037366265383562633033376263623130356466363366 Dec 16 12:15:20.201000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:15:20.201000 audit[4803]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4792 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653230323037366265383562633033376263623130356466363366 Dec 16 12:15:20.201000 audit: BPF prog-id=203 op=LOAD Dec 16 12:15:20.201000 audit[4803]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4792 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653230323037366265383562633033376263623130356466363366 Dec 16 12:15:20.229352 containerd[2156]: time="2025-12-16T12:15:20.229301950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98b9677f6-82g96,Uid:738872a1-0466-4442-a71b-b4f7bae6b427,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0e202076be85bc037bcb105df63ff682ca25a936466f2852f617cebbcbd89ed\"" Dec 16 12:15:20.232746 containerd[2156]: time="2025-12-16T12:15:20.232706180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:15:20.483300 kubelet[3702]: I1216 12:15:20.483080 3702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf43de7f-9014-448e-837a-134a14319b6e" path="/var/lib/kubelet/pods/cf43de7f-9014-448e-837a-134a14319b6e/volumes" Dec 16 12:15:20.493806 containerd[2156]: time="2025-12-16T12:15:20.493697190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:20.497664 containerd[2156]: time="2025-12-16T12:15:20.497564693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:15:20.497664 containerd[2156]: time="2025-12-16T12:15:20.497626875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:20.499434 kubelet[3702]: E1216 12:15:20.499392 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:15:20.499510 kubelet[3702]: E1216 12:15:20.499454 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:15:20.504710 kubelet[3702]: E1216 12:15:20.504643 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b28635fdc3bf4ca29f7ba01fb74863fe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwb7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98b9677f6-82g96_calico-system(738872a1-0466-4442-a71b-b4f7bae6b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:20.506930 containerd[2156]: time="2025-12-16T12:15:20.506899078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:15:20.524485 kubelet[3702]: I1216 12:15:20.524398 3702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:15:20.549000 audit[4830]: NETFILTER_CFG table=filter:122 family=2 entries=21 op=nft_register_rule pid=4830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:20.549000 audit[4830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff1dcb3a0 a2=0 a3=1 items=0 ppid=3860 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:20.554000 audit[4830]: NETFILTER_CFG table=nat:123 family=2 entries=19 op=nft_register_chain pid=4830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:20.554000 audit[4830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff1dcb3a0 a2=0 a3=1 items=0 ppid=3860 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:20.782000 audit: BPF prog-id=204 op=LOAD Dec 16 12:15:20.782000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=3 a0=5 a1=fffff1289758 a2=98 a3=fffff1289748 items=0 ppid=4846 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:20.782000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:15:20.782000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff1289728 a3=0 items=0 ppid=4846 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:20.782000 audit: BPF prog-id=205 op=LOAD Dec 16 12:15:20.782000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1289608 a2=74 a3=95 items=0 ppid=4846 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:20.782000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:15:20.782000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4846 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:20.782000 audit: BPF prog-id=206 op=LOAD Dec 16 12:15:20.782000 audit[4931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1289638 a2=40 a3=fffff1289668 items=0 ppid=4846 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:20.782000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:15:20.782000 audit[4931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff1289668 items=0 ppid=4846 pid=4931 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.782000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:15:20.784000 audit: BPF prog-id=207 op=LOAD Dec 16 12:15:20.784000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd9217ed8 a2=98 a3=ffffd9217ec8 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.785731 containerd[2156]: time="2025-12-16T12:15:20.785635869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:20.784000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:15:20.784000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd9217ea8 a3=0 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.785000 audit: BPF prog-id=208 op=LOAD Dec 16 12:15:20.785000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd9217b68 a2=74 a3=95 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.785000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.785000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:15:20.785000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.785000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.786000 audit: BPF prog-id=209 op=LOAD Dec 16 12:15:20.786000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd9217bc8 a2=94 a3=2 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.786000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.786000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:15:20.786000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.786000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.788979 
containerd[2156]: time="2025-12-16T12:15:20.788930689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:20.789055 containerd[2156]: time="2025-12-16T12:15:20.789013169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:15:20.789529 kubelet[3702]: E1216 12:15:20.789409 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:15:20.789631 kubelet[3702]: E1216 12:15:20.789542 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:15:20.790407 kubelet[3702]: E1216 12:15:20.789755 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwb7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98b9677f6-82g96_calico-system(738872a1-0466-4442-a71b-b4f7bae6b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:15:20.791049 kubelet[3702]: E1216 12:15:20.791007 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:15:20.894190 kubelet[3702]: I1216 12:15:20.893699 3702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:15:20.908000 audit: BPF prog-id=210 op=LOAD Dec 16 12:15:20.908000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd9217b88 a2=40 a3=ffffd9217bb8 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.908000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.909000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:15:20.909000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd9217bb8 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.909000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.916000 audit: BPF prog-id=211 op=LOAD Dec 16 12:15:20.916000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd9217b98 a2=94 a3=4 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.916000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:15:20.916000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.917000 audit: BPF prog-id=212 op=LOAD Dec 16 12:15:20.917000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd92179d8 a2=94 a3=5 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.917000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:15:20.917000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.917000 audit: BPF prog-id=213 op=LOAD Dec 16 12:15:20.917000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd9217c08 a2=94 a3=6 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.917000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:15:20.917000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.917000 audit: BPF prog-id=214 op=LOAD Dec 16 12:15:20.917000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd92173d8 a2=94 a3=83 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.917000 audit: BPF prog-id=215 op=LOAD Dec 16 12:15:20.917000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd9217198 a2=94 a3=2 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.917000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:15:20.917000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.918000 audit: BPF prog-id=214 op=UNLOAD Dec 16 12:15:20.918000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3eed6620 a3=3eec9b00 items=0 ppid=4846 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.918000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:15:20.929000 audit: BPF prog-id=216 op=LOAD Dec 16 12:15:20.929000 audit[4975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe74e2478 a2=98 a3=ffffe74e2468 items=0 ppid=4846 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:15:20.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:20.929000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:15:20.929000 audit[4975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe74e2448 a3=0 items=0 ppid=4846 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:20.929000 audit: BPF prog-id=217 op=LOAD Dec 16 12:15:20.929000 audit[4975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe74e2328 a2=74 a3=95 items=0 ppid=4846 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:20.929000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:15:20.929000 audit[4975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4846 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:20.929000 audit: BPF prog-id=218 op=LOAD Dec 16 12:15:20.929000 audit[4975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe74e2358 a2=40 a3=ffffe74e2388 items=0 ppid=4846 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.929000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:20.929000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:15:20.929000 audit[4975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe74e2388 items=0 ppid=4846 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:20.929000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:15:21.064630 systemd-networkd[1754]: vxlan.calico: Link UP Dec 16 12:15:21.084000 audit: BPF prog-id=219 op=LOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6ff9678 a2=98 a3=fffff6ff9668 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff6ff9648 a3=0 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=220 op=LOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6ff9358 a2=74 a3=95 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=221 op=LOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6ff93b8 a2=94 a3=2 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=222 op=LOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6ff9238 a2=40 a3=fffff6ff9268 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff6ff9268 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=223 op=LOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6ff9388 a2=94 a3=b7 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.084000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:15:21.084000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.084000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.086000 audit: BPF prog-id=224 op=LOAD Dec 16 12:15:21.086000 audit[5030]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=6 a0=5 a1=fffff6ff8a38 a2=94 a3=2 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.086000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.086000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:15:21.086000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.086000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.086000 audit: BPF prog-id=225 op=LOAD Dec 16 12:15:21.086000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6ff8bc8 a2=94 a3=30 items=0 ppid=4846 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.086000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:15:21.092000 audit: BPF prog-id=226 op=LOAD Dec 16 12:15:21.092000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffda1ed1c8 a2=98 a3=ffffda1ed1b8 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.092000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.092000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:15:21.092000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffda1ed198 a3=0 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.092000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.093000 audit: BPF prog-id=227 op=LOAD Dec 16 12:15:21.093000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda1ece58 a2=74 a3=95 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.093000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.093000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:15:21.093000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.093000 audit: BPF prog-id=228 op=LOAD Dec 16 12:15:21.093000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda1eceb8 a2=94 a3=2 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.093000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:15:21.093000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.064636 systemd-networkd[1754]: vxlan.calico: Gained carrier Dec 16 12:15:21.178000 audit: BPF prog-id=229 op=LOAD Dec 16 12:15:21.178000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda1ece78 a2=40 a3=ffffda1ecea8 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.178000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.178000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:15:21.178000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffda1ecea8 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.178000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.185000 audit: BPF prog-id=230 op=LOAD Dec 16 12:15:21.185000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda1ece88 a2=94 a3=4 items=0 ppid=4846 pid=5035 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.185000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.185000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:15:21.185000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.185000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.186000 audit: BPF prog-id=231 op=LOAD Dec 16 12:15:21.186000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffda1eccc8 a2=94 a3=5 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.186000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:15:21.186000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.186000 audit: BPF prog-id=232 op=LOAD Dec 16 12:15:21.186000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda1ecef8 a2=94 a3=6 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.186000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:15:21.186000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.186000 audit: BPF prog-id=233 op=LOAD Dec 16 12:15:21.186000 audit[5035]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda1ec6c8 a2=94 a3=83 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.186000 audit: BPF prog-id=234 op=LOAD Dec 16 12:15:21.186000 audit[5035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffda1ec488 a2=94 a3=2 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.186000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:15:21.186000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.186000 audit: BPF prog-id=233 op=UNLOAD Dec 16 12:15:21.186000 audit[5035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=897c620 a3=896fb00 items=0 ppid=4846 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.186000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:15:21.190000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:15:21.190000 audit[4846]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000752300 a2=0 a3=0 items=0 ppid=4835 pid=4846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.190000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:15:21.317000 audit[5057]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=5057 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:21.317000 audit[5057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd0033840 a2=0 a3=ffff9becdfa8 items=0 ppid=4846 pid=5057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.317000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:21.319000 audit[5059]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5059 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:21.319000 audit[5059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc922df80 a2=0 a3=ffff9882efa8 items=0 ppid=4846 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.319000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:21.326000 audit[5062]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5062 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:21.326000 audit[5062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd9916700 a2=0 a3=ffffac65cfa8 items=0 ppid=4846 pid=5062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.326000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:21.336000 audit[5058]: NETFILTER_CFG table=filter:127 family=2 entries=94 op=nft_register_chain pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:21.336000 audit[5058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffd7012da0 a2=0 a3=ffff82a40fa8 items=0 ppid=4846 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.336000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:21.590830 kubelet[3702]: E1216 12:15:21.590285 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:15:21.609608 systemd-networkd[1754]: cali3b6472485fe: Gained IPv6LL Dec 16 12:15:21.615000 audit[5074]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
12:15:21.615000 audit[5074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc3160ed0 a2=0 a3=1 items=0 ppid=3860 pid=5074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:21.620000 audit[5074]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:21.620000 audit[5074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc3160ed0 a2=0 a3=1 items=0 ppid=3860 pid=5074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:21.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:22.440629 systemd-networkd[1754]: vxlan.calico: Gained IPv6LL Dec 16 12:15:22.482457 containerd[2156]: time="2025-12-16T12:15:22.482243605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547b944668-qhgzg,Uid:7de63821-b623-478a-a40e-6502071e35ea,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:22.580564 systemd-networkd[1754]: calib7fbda19f3e: Link UP Dec 16 12:15:22.581291 systemd-networkd[1754]: calib7fbda19f3e: Gained carrier Dec 16 12:15:22.600612 containerd[2156]: 2025-12-16 12:15:22.523 [INFO][5077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0 calico-kube-controllers-547b944668- calico-system 7de63821-b623-478a-a40e-6502071e35ea 814 0 2025-12-16 12:15:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:547b944668 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-a-4d45b340a5 calico-kube-controllers-547b944668-qhgzg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib7fbda19f3e [] [] }} ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Namespace="calico-system" Pod="calico-kube-controllers-547b944668-qhgzg" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-" Dec 16 12:15:22.600612 containerd[2156]: 2025-12-16 12:15:22.523 [INFO][5077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Namespace="calico-system" Pod="calico-kube-controllers-547b944668-qhgzg" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" Dec 16 12:15:22.600612 containerd[2156]: 2025-12-16 12:15:22.542 [INFO][5089] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" HandleID="k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" 
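The `proctitle=` fields in the audit records above are hex-encoded command lines, with NUL bytes separating the arguments. A minimal decoding sketch (illustrative only, not part of the log; the sample value is copied from the iptables-nft-restore records above):

```python
# Decode an audit PROCTITLE value: hex string -> argv joined with spaces.
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365"
    "002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
```

The bpftool and runc proctitles elsewhere in this section decode the same way.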
Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.542 [INFO][5089] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" HandleID="k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-4d45b340a5", "pod":"calico-kube-controllers-547b944668-qhgzg", "timestamp":"2025-12-16 12:15:22.542564943 +0000 UTC"}, Hostname:"ci-4547.0.0-a-4d45b340a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.542 [INFO][5089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.542 [INFO][5089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.542 [INFO][5089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-4d45b340a5' Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.548 [INFO][5089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.552 [INFO][5089] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.556 [INFO][5089] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.558 [INFO][5089] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.600877 containerd[2156]: 2025-12-16 12:15:22.560 [INFO][5089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.601081 containerd[2156]: 2025-12-16 12:15:22.560 [INFO][5089] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.601081 containerd[2156]: 2025-12-16 12:15:22.561 [INFO][5089] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9 Dec 16 12:15:22.601081 containerd[2156]: 2025-12-16 12:15:22.565 [INFO][5089] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.601081 containerd[2156]: 2025-12-16 12:15:22.574 [INFO][5089] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.130/26] block=192.168.65.128/26 handle="k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.601081 containerd[2156]: 2025-12-16 12:15:22.574 [INFO][5089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.130/26] 
handle="k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:22.601081 containerd[2156]: 2025-12-16 12:15:22.574 [INFO][5089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:15:22.601081 containerd[2156]: 2025-12-16 12:15:22.575 [INFO][5089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.130/26] IPv6=[] ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" HandleID="k8s-pod-network.f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" Dec 16 12:15:22.601196 containerd[2156]: 2025-12-16 12:15:22.576 [INFO][5077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Namespace="calico-system" Pod="calico-kube-controllers-547b944668-qhgzg" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0", GenerateName:"calico-kube-controllers-547b944668-", Namespace:"calico-system", SelfLink:"", UID:"7de63821-b623-478a-a40e-6502071e35ea", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547b944668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"", Pod:"calico-kube-controllers-547b944668-qhgzg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7fbda19f3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:22.601239 containerd[2156]: 2025-12-16 12:15:22.576 [INFO][5077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.130/32] ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Namespace="calico-system" Pod="calico-kube-controllers-547b944668-qhgzg" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" Dec 16 12:15:22.601239 containerd[2156]: 2025-12-16 12:15:22.576 [INFO][5077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7fbda19f3e ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Namespace="calico-system" Pod="calico-kube-controllers-547b944668-qhgzg" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" Dec 16 12:15:22.601239 containerd[2156]: 2025-12-16 12:15:22.582 [INFO][5077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Namespace="calico-system" Pod="calico-kube-controllers-547b944668-qhgzg" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" Dec 16 12:15:22.601293 containerd[2156]: 2025-12-16 12:15:22.583 [INFO][5077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Namespace="calico-system" Pod="calico-kube-controllers-547b944668-qhgzg" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0", GenerateName:"calico-kube-controllers-547b944668-", Namespace:"calico-system", SelfLink:"", UID:"7de63821-b623-478a-a40e-6502071e35ea", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547b944668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9", Pod:"calico-kube-controllers-547b944668-qhgzg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7fbda19f3e", MAC:"52:85:7d:d9:a2:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:22.601438 containerd[2156]: 2025-12-16 12:15:22.597 [INFO][5077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" Namespace="calico-system" Pod="calico-kube-controllers-547b944668-qhgzg" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--kube--controllers--547b944668--qhgzg-eth0" Dec 16 12:15:22.610000 audit[5103]: NETFILTER_CFG table=filter:130 family=2 entries=36 op=nft_register_chain pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:22.610000 audit[5103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffffb125900 a2=0 a3=ffff9c93efa8 items=0 ppid=4846 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:22.610000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:22.644504 containerd[2156]: time="2025-12-16T12:15:22.644421006Z" level=info msg="connecting to shim 
f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9" address="unix:///run/containerd/s/008ddbc9c6a24a3adf1df08a7664ff7a9519b448ed814f095c42af3d740b4882" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:22.666666 systemd[1]: Started cri-containerd-f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9.scope - libcontainer container f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9. Dec 16 12:15:22.674000 audit: BPF prog-id=235 op=LOAD Dec 16 12:15:22.675000 audit: BPF prog-id=236 op=LOAD Dec 16 12:15:22.675000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5114 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:22.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637643937383738626664373732313233333261386430383431386435 Dec 16 12:15:22.675000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:15:22.675000 audit[5125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5114 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:22.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637643937383738626664373732313233333261386430383431386435 Dec 16 12:15:22.675000 audit: BPF prog-id=237 op=LOAD Dec 16 12:15:22.675000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5114 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:22.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637643937383738626664373732313233333261386430383431386435 Dec 16 12:15:22.675000 audit: BPF prog-id=238 op=LOAD Dec 16 12:15:22.675000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5114 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:22.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637643937383738626664373732313233333261386430383431386435 Dec 16 12:15:22.675000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:15:22.675000 audit[5125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5114 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:22.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637643937383738626664373732313233333261386430383431386435 Dec 16 12:15:22.676000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:15:22.676000 audit[5125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5114 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:22.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637643937383738626664373732313233333261386430383431386435 Dec 16 12:15:22.676000 audit: BPF prog-id=239 op=LOAD Dec 16 12:15:22.676000 audit[5125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5114 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:22.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637643937383738626664373732313233333261386430383431386435 Dec 16 12:15:22.702685 containerd[2156]: time="2025-12-16T12:15:22.702467415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547b944668-qhgzg,Uid:7de63821-b623-478a-a40e-6502071e35ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7d97878bfd77212332a8d08418d5745b4f443b5ef2faaee2832ed9844d4e1f9\"" Dec 16 12:15:22.705126 containerd[2156]: time="2025-12-16T12:15:22.705078651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:15:22.995085 containerd[2156]: time="2025-12-16T12:15:22.994833682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:22.997982 containerd[2156]: time="2025-12-16T12:15:22.997938438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:15:22.998077 containerd[2156]: time="2025-12-16T12:15:22.997953743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:22.998267 kubelet[3702]: E1216 12:15:22.998218 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:15:22.998676 kubelet[3702]: E1216 12:15:22.998280 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:15:22.998676 kubelet[3702]: E1216 12:15:22.998435 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fblkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547b944668-qhgzg_calico-system(7de63821-b623-478a-a40e-6502071e35ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:23.000105 kubelet[3702]: E1216 12:15:23.000074 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" 
podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:15:23.597035 kubelet[3702]: E1216 12:15:23.596984 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:15:24.482355 containerd[2156]: time="2025-12-16T12:15:24.482226908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfrxj,Uid:8828f39e-aa8e-4fd0-b7c6-7777cb3950e2,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:24.488849 systemd-networkd[1754]: calib7fbda19f3e: Gained IPv6LL Dec 16 12:15:24.599046 kubelet[3702]: E1216 12:15:24.599001 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:15:24.630540 systemd-networkd[1754]: calie993a03ebda: Link UP Dec 16 12:15:24.630731 systemd-networkd[1754]: calie993a03ebda: Gained carrier Dec 16 12:15:24.655639 containerd[2156]: 2025-12-16 12:15:24.515 [INFO][5152] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0 coredns-674b8bbfcf- kube-system 8828f39e-aa8e-4fd0-b7c6-7777cb3950e2 818 0 2025-12-16 12:14:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-4d45b340a5 coredns-674b8bbfcf-gfrxj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie993a03ebda [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfrxj" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-" Dec 16 12:15:24.655639 containerd[2156]: 2025-12-16 12:15:24.515 [INFO][5152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfrxj" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" Dec 16 12:15:24.655639 containerd[2156]: 2025-12-16 12:15:24.541 [INFO][5164] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" HandleID="k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Workload="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.541 [INFO][5164] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" HandleID="k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Workload="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b170), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-4d45b340a5", "pod":"coredns-674b8bbfcf-gfrxj", "timestamp":"2025-12-16 12:15:24.541311482 +0000 UTC"}, Hostname:"ci-4547.0.0-a-4d45b340a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.541 [INFO][5164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.541 [INFO][5164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.541 [INFO][5164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-4d45b340a5' Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.548 [INFO][5164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.551 [INFO][5164] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.555 [INFO][5164] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.557 [INFO][5164] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656093 containerd[2156]: 2025-12-16 12:15:24.560 [INFO][5164] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656369 containerd[2156]: 2025-12-16 12:15:24.560 [INFO][5164] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656369 containerd[2156]: 2025-12-16 12:15:24.562 [INFO][5164] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b Dec 16 12:15:24.656369 containerd[2156]: 2025-12-16 12:15:24.585 [INFO][5164] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656369 containerd[2156]: 2025-12-16 12:15:24.622 [INFO][5164] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.131/26] block=192.168.65.128/26 handle="k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656369 containerd[2156]: 2025-12-16 12:15:24.623 [INFO][5164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.131/26] handle="k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:24.656369 containerd[2156]: 2025-12-16 12:15:24.623 
[INFO][5164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:15:24.656369 containerd[2156]: 2025-12-16 12:15:24.623 [INFO][5164] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.131/26] IPv6=[] ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" HandleID="k8s-pod-network.5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Workload="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" Dec 16 12:15:24.656622 containerd[2156]: 2025-12-16 12:15:24.625 [INFO][5152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfrxj" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8828f39e-aa8e-4fd0-b7c6-7777cb3950e2", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"", Pod:"coredns-674b8bbfcf-gfrxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie993a03ebda", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:24.656622 containerd[2156]: 2025-12-16 12:15:24.625 [INFO][5152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.131/32] ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfrxj" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" Dec 16 12:15:24.656622 containerd[2156]: 2025-12-16 12:15:24.625 [INFO][5152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie993a03ebda ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfrxj" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" Dec 16 12:15:24.656622 containerd[2156]: 2025-12-16 12:15:24.628 [INFO][5152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfrxj" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" Dec 16 12:15:24.656622 containerd[2156]: 2025-12-16 12:15:24.631 [INFO][5152] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfrxj" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8828f39e-aa8e-4fd0-b7c6-7777cb3950e2", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b", Pod:"coredns-674b8bbfcf-gfrxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie993a03ebda", MAC:"96:04:f3:42:24:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:24.656622 containerd[2156]: 2025-12-16 12:15:24.652 [INFO][5152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfrxj" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--gfrxj-eth0" Dec 16 12:15:24.668000 audit[5178]: NETFILTER_CFG table=filter:131 family=2 entries=46 op=nft_register_chain pid=5178 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:24.673196 kernel: kauditd_printk_skb: 262 callbacks suppressed Dec 16 12:15:24.673286 kernel: audit: type=1325 audit(1765887324.668:688): table=filter:131 family=2 entries=46 op=nft_register_chain pid=5178 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:24.668000 audit[5178]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffd46b5550 a2=0 a3=ffffa30a2fa8 items=0 ppid=4846 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.705552 kernel: audit: type=1300 audit(1765887324.668:688): arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffd46b5550 a2=0 a3=ffffa30a2fa8 items=0 ppid=4846 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.668000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:24.720659 kernel: audit: type=1327 audit(1765887324.668:688): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:24.744527 containerd[2156]: time="2025-12-16T12:15:24.743912188Z" level=info msg="connecting to shim 5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b" address="unix:///run/containerd/s/45034d6886942fd407e934002b2546acc8106d23a53978840b866130af21bc4e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:24.771665 systemd[1]: Started cri-containerd-5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b.scope - libcontainer container 5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b. Dec 16 12:15:24.779000 audit: BPF prog-id=240 op=LOAD Dec 16 12:15:24.785600 kernel: audit: type=1334 audit(1765887324.779:689): prog-id=240 op=LOAD Dec 16 12:15:24.784000 audit: BPF prog-id=241 op=LOAD Dec 16 12:15:24.784000 audit[5199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.809784 kernel: audit: type=1334 audit(1765887324.784:690): prog-id=241 op=LOAD Dec 16 12:15:24.809870 kernel: audit: type=1300 audit(1765887324.784:690): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.828656 kernel: audit: type=1327 audit(1765887324.784:690): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.784000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:15:24.833836 kernel: audit: type=1334 audit(1765887324.784:691): prog-id=241 op=UNLOAD Dec 16 12:15:24.784000 audit[5199]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:15:24.851242 kernel: audit: type=1300 audit(1765887324.784:691): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.870171 kernel: audit: type=1327 audit(1765887324.784:691): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.784000 audit: BPF prog-id=242 op=LOAD Dec 16 12:15:24.784000 audit[5199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.789000 audit: BPF prog-id=243 op=LOAD Dec 16 12:15:24.789000 audit[5199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.789000 audit: BPF prog-id=243 op=UNLOAD Dec 16 12:15:24.789000 audit[5199]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.789000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:15:24.789000 audit[5199]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.789000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.789000 audit: BPF prog-id=244 op=LOAD Dec 16 12:15:24.789000 audit[5199]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5188 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386436343563326133323064343537376232353062656336353762 Dec 16 12:15:24.884533 containerd[2156]: time="2025-12-16T12:15:24.884496859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfrxj,Uid:8828f39e-aa8e-4fd0-b7c6-7777cb3950e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b\"" Dec 16 12:15:24.893249 containerd[2156]: time="2025-12-16T12:15:24.893151626Z" level=info msg="CreateContainer within sandbox \"5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:15:24.921010 containerd[2156]: time="2025-12-16T12:15:24.920563780Z" level=info msg="Container 75348ef3eee120852d6027ca19c01ee457e403cda8efaae33acac3e860cc4fde: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:24.939360 containerd[2156]: time="2025-12-16T12:15:24.939323431Z" level=info msg="CreateContainer within sandbox \"5d8d645c2a320d4577b250bec657b43268fd715719aa72bb05a1aaee2ea0ac8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75348ef3eee120852d6027ca19c01ee457e403cda8efaae33acac3e860cc4fde\"" Dec 16 12:15:24.940655 containerd[2156]: time="2025-12-16T12:15:24.940632217Z" level=info msg="StartContainer for \"75348ef3eee120852d6027ca19c01ee457e403cda8efaae33acac3e860cc4fde\"" Dec 16 12:15:24.941784 containerd[2156]: time="2025-12-16T12:15:24.941765245Z" level=info msg="connecting to shim 75348ef3eee120852d6027ca19c01ee457e403cda8efaae33acac3e860cc4fde" address="unix:///run/containerd/s/45034d6886942fd407e934002b2546acc8106d23a53978840b866130af21bc4e" protocol=ttrpc version=3 Dec 16 12:15:24.957638 systemd[1]: Started cri-containerd-75348ef3eee120852d6027ca19c01ee457e403cda8efaae33acac3e860cc4fde.scope - libcontainer container 75348ef3eee120852d6027ca19c01ee457e403cda8efaae33acac3e860cc4fde. 
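The `kernel: audit: type=NNNN` lines interleaved above carry the same records as the bare audit entries, just echoed through the kernel log. Matching the numeric types against the record contents visible in this section gives the following lookup (a small sketch covering only the types that actually appear here):

```python
# Audit record types observed in this log section, keyed by the numeric
# value shown in the kernel-echoed "audit: type=NNNN" lines.
AUDIT_TYPES = {
    1300: "SYSCALL",        # e.g. the xtables-nft-multi netlink sendto() calls
    1325: "NETFILTER_CFG",  # nft_register_chain / nft_register_rule events
    1327: "PROCTITLE",      # hex-encoded command line of the auditing process
    1334: "BPF",            # BPF prog-id LOAD/UNLOAD events from runc and bpftool
}

def name_for(record_type: int) -> str:
    return AUDIT_TYPES.get(record_type, f"UNKNOWN({record_type})")

print(name_for(1334))  # BPF
```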
Dec 16 12:15:24.964000 audit: BPF prog-id=245 op=LOAD Dec 16 12:15:24.965000 audit: BPF prog-id=246 op=LOAD Dec 16 12:15:24.965000 audit[5225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5188 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735333438656633656565313230383532643630323763613139633031 Dec 16 12:15:24.965000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:15:24.965000 audit[5225]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5188 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735333438656633656565313230383532643630323763613139633031 Dec 16 12:15:24.965000 audit: BPF prog-id=247 op=LOAD Dec 16 12:15:24.965000 audit[5225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5188 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735333438656633656565313230383532643630323763613139633031 Dec 16 12:15:24.965000 audit: BPF prog-id=248 op=LOAD Dec 16 12:15:24.965000 audit[5225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5188 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735333438656633656565313230383532643630323763613139633031 Dec 16 12:15:24.965000 audit: BPF prog-id=248 op=UNLOAD Dec 16 12:15:24.965000 audit[5225]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5188 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735333438656633656565313230383532643630323763613139633031 Dec 16 12:15:24.965000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:15:24.965000 audit[5225]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5188 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735333438656633656565313230383532643630323763613139633031 Dec 16 12:15:24.965000 audit: BPF prog-id=249 op=LOAD Dec 16 12:15:24.965000 audit[5225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5188 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:24.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735333438656633656565313230383532643630323763613139633031 Dec 16 12:15:24.987209 containerd[2156]: time="2025-12-16T12:15:24.987177157Z" level=info msg="StartContainer for \"75348ef3eee120852d6027ca19c01ee457e403cda8efaae33acac3e860cc4fde\" returns successfully" Dec 16 12:15:25.481292 containerd[2156]: time="2025-12-16T12:15:25.481230560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74755c97b7-c55sp,Uid:e0f8af17-5d0e-41d3-8143-e682bcff58c4,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:15:25.481621 containerd[2156]: time="2025-12-16T12:15:25.481230512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7x78m,Uid:c365101f-0c2a-4266-abb7-2136287ff3ab,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:25.611366 systemd-networkd[1754]: cali9ec7e9bfcff: Link UP Dec 16 12:15:25.612596 systemd-networkd[1754]: cali9ec7e9bfcff: Gained carrier Dec 16 12:15:25.623384 kubelet[3702]: I1216 12:15:25.623169 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gfrxj" podStartSLOduration=41.623151884 podStartE2EDuration="41.623151884s" podCreationTimestamp="2025-12-16 12:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:25.621880869 +0000 UTC m=+47.228498148" watchObservedRunningTime="2025-12-16 12:15:25.623151884 +0000 UTC m=+47.229769131" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.542 [INFO][5258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0 calico-apiserver-74755c97b7- calico-apiserver e0f8af17-5d0e-41d3-8143-e682bcff58c4 821 0 2025-12-16 12:14:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74755c97b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-4d45b340a5 calico-apiserver-74755c97b7-c55sp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ec7e9bfcff [] [] }} 
ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-c55sp" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.542 [INFO][5258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-c55sp" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.564 [INFO][5287] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" HandleID="k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.564 [INFO][5287] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" HandleID="k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-4d45b340a5", "pod":"calico-apiserver-74755c97b7-c55sp", "timestamp":"2025-12-16 12:15:25.564624044 +0000 UTC"}, Hostname:"ci-4547.0.0-a-4d45b340a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.564 [INFO][5287] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.564 [INFO][5287] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.564 [INFO][5287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-4d45b340a5' Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.571 [INFO][5287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.575 [INFO][5287] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.579 [INFO][5287] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.581 [INFO][5287] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.583 [INFO][5287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.583 [INFO][5287] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.584 [INFO][5287] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53 Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.591 [INFO][5287] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.597 [INFO][5287] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.132/26] block=192.168.65.128/26 handle="k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.597 [INFO][5287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.132/26] handle="k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.597 [INFO][5287] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:25.630820 containerd[2156]: 2025-12-16 12:15:25.597 [INFO][5287] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.132/26] IPv6=[] ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" HandleID="k8s-pod-network.27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" Dec 16 12:15:25.631371 containerd[2156]: 2025-12-16 12:15:25.605 [INFO][5258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-c55sp" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0", GenerateName:"calico-apiserver-74755c97b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0f8af17-5d0e-41d3-8143-e682bcff58c4", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74755c97b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"", Pod:"calico-apiserver-74755c97b7-c55sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ec7e9bfcff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:25.631371 containerd[2156]: 2025-12-16 12:15:25.605 [INFO][5258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.132/32] ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-c55sp" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" Dec 16 12:15:25.631371 containerd[2156]: 2025-12-16 12:15:25.605 [INFO][5258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ec7e9bfcff ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-c55sp" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" Dec 16 12:15:25.631371 containerd[2156]: 2025-12-16 12:15:25.614 [INFO][5258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-c55sp" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" Dec 16 12:15:25.631371 containerd[2156]: 2025-12-16 12:15:25.615 
[INFO][5258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-c55sp" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0", GenerateName:"calico-apiserver-74755c97b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0f8af17-5d0e-41d3-8143-e682bcff58c4", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74755c97b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53", Pod:"calico-apiserver-74755c97b7-c55sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ec7e9bfcff", MAC:"ea:a0:48:3d:4c:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:25.631371 containerd[2156]: 2025-12-16 12:15:25.626 [INFO][5258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-c55sp" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--c55sp-eth0" Dec 16 12:15:25.639000 audit[5302]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:25.639000 audit[5302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcc541390 a2=0 a3=1 items=0 ppid=3860 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:25.644000 audit[5302]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:25.644000 audit[5302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcc541390 a2=0 a3=1 items=0 ppid=3860 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.644000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:25.662000 audit[5305]: NETFILTER_CFG table=filter:134 family=2 entries=58 op=nft_register_chain pid=5305 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:25.662000 audit[5305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30584 a0=3 a1=ffffc6e62070 a2=0 a3=ffffaf59ffa8 items=0 ppid=4846 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.662000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:25.674000 audit[5307]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:25.674000 audit[5307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe20cfe50 a2=0 a3=1 items=0 ppid=3860 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:25.678000 audit[5307]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:25.678000 audit[5307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe20cfe50 a2=0 a3=1 items=0 ppid=3860 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:25.695137 containerd[2156]: time="2025-12-16T12:15:25.695025456Z" level=info msg="connecting to shim 27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53" address="unix:///run/containerd/s/2846bdacf03473b611033f8370dcde21477c21beb0c911572432659353220279" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:25.724865 systemd-networkd[1754]: calic59047bc8ee: Link UP Dec 16 12:15:25.725794 systemd-networkd[1754]: calic59047bc8ee: Gained carrier Dec 16 12:15:25.727684 systemd[1]: Started cri-containerd-27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53.scope - libcontainer container 27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53. 
Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.537 [INFO][5262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0 goldmane-666569f655- calico-system c365101f-0c2a-4266-abb7-2136287ff3ab 819 0 2025-12-16 12:14:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-a-4d45b340a5 goldmane-666569f655-7x78m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic59047bc8ee [] [] }} ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Namespace="calico-system" Pod="goldmane-666569f655-7x78m" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.538 [INFO][5262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Namespace="calico-system" Pod="goldmane-666569f655-7x78m" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.568 [INFO][5282] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" HandleID="k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Workload="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.568 [INFO][5282] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" HandleID="k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Workload="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-4d45b340a5", "pod":"goldmane-666569f655-7x78m", "timestamp":"2025-12-16 12:15:25.568258659 +0000 UTC"}, Hostname:"ci-4547.0.0-a-4d45b340a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.568 [INFO][5282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.597 [INFO][5282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.598 [INFO][5282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-4d45b340a5' Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.671 [INFO][5282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.681 [INFO][5282] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.687 [INFO][5282] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.689 [INFO][5282] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.693 [INFO][5282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.693 [INFO][5282] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.696 [INFO][5282] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86 Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.704 [INFO][5282] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.713 [INFO][5282] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.133/26] block=192.168.65.128/26 handle="k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.713 [INFO][5282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.133/26] handle="k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.713 [INFO][5282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:25.747910 containerd[2156]: 2025-12-16 12:15:25.713 [INFO][5282] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.133/26] IPv6=[] ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" HandleID="k8s-pod-network.550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Workload="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" Dec 16 12:15:25.748935 containerd[2156]: 2025-12-16 12:15:25.716 [INFO][5262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Namespace="calico-system" Pod="goldmane-666569f655-7x78m" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c365101f-0c2a-4266-abb7-2136287ff3ab", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"", Pod:"goldmane-666569f655-7x78m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic59047bc8ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:25.748935 containerd[2156]: 2025-12-16 12:15:25.717 [INFO][5262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.133/32] ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Namespace="calico-system" Pod="goldmane-666569f655-7x78m" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" Dec 16 12:15:25.748935 containerd[2156]: 2025-12-16 12:15:25.717 [INFO][5262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic59047bc8ee ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Namespace="calico-system" Pod="goldmane-666569f655-7x78m" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" Dec 16 12:15:25.748935 containerd[2156]: 2025-12-16 12:15:25.725 [INFO][5262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Namespace="calico-system" Pod="goldmane-666569f655-7x78m" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" Dec 16 12:15:25.748935 containerd[2156]: 2025-12-16 12:15:25.726 [INFO][5262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" 
Namespace="calico-system" Pod="goldmane-666569f655-7x78m" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c365101f-0c2a-4266-abb7-2136287ff3ab", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86", Pod:"goldmane-666569f655-7x78m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic59047bc8ee", MAC:"2e:6d:34:f6:f4:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:25.748935 containerd[2156]: 2025-12-16 12:15:25.741 [INFO][5262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" Namespace="calico-system" Pod="goldmane-666569f655-7x78m" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-goldmane--666569f655--7x78m-eth0" Dec 16 12:15:25.752000 audit: BPF prog-id=250 op=LOAD Dec 16 12:15:25.753000 audit: BPF prog-id=251 op=LOAD Dec 16 12:15:25.753000 audit[5327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5316 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633566313431333835373733656661656234613130326565353933 Dec 16 12:15:25.754000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:15:25.754000 audit[5327]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633566313431333835373733656661656234613130326565353933 Dec 16 12:15:25.754000 audit: BPF prog-id=252 op=LOAD Dec 16 12:15:25.754000 audit[5327]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5316 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633566313431333835373733656661656234613130326565353933 Dec 16 12:15:25.754000 audit: BPF prog-id=253 op=LOAD Dec 16 12:15:25.754000 audit[5327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5316 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633566313431333835373733656661656234613130326565353933 Dec 16 12:15:25.754000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:15:25.754000 audit[5327]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633566313431333835373733656661656234613130326565353933 Dec 16 12:15:25.754000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:15:25.754000 audit[5327]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5316 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633566313431333835373733656661656234613130326565353933 Dec 16 12:15:25.754000 audit: BPF prog-id=254 op=LOAD Dec 16 12:15:25.754000 audit[5327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5316 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633566313431333835373733656661656234613130326565353933 Dec 16 12:15:25.765000 audit[5357]: NETFILTER_CFG table=filter:137 family=2 entries=56 op=nft_register_chain pid=5357 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:25.765000 audit[5357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28744 a0=3 
a1=ffffe7134920 a2=0 a3=ffff8b85afa8 items=0 ppid=4846 pid=5357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.765000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:25.787232 containerd[2156]: time="2025-12-16T12:15:25.787197420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74755c97b7-c55sp,Uid:e0f8af17-5d0e-41d3-8143-e682bcff58c4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"27c5f141385773efaeb4a102ee593912646056a6429bd60a9801b1a9ae4b1b53\"" Dec 16 12:15:25.789798 containerd[2156]: time="2025-12-16T12:15:25.789145754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:15:25.792867 containerd[2156]: time="2025-12-16T12:15:25.792844654Z" level=info msg="connecting to shim 550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86" address="unix:///run/containerd/s/44c37538e49682e7eb2f1cbd7776d248dda1be378d5ea5579408d1453819c5b5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:25.815724 systemd[1]: Started cri-containerd-550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86.scope - libcontainer container 550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86. Dec 16 12:15:25.825000 audit: BPF prog-id=255 op=LOAD Dec 16 12:15:25.825000 audit: BPF prog-id=256 op=LOAD Dec 16 12:15:25.825000 audit[5387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5376 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306533623662383562396165393635386637626537363130383134 Dec 16 12:15:25.825000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:15:25.825000 audit[5387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5376 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306533623662383562396165393635386637626537363130383134 Dec 16 12:15:25.826000 audit: BPF prog-id=257 op=LOAD Dec 16 12:15:25.826000 audit[5387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5376 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.826000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306533623662383562396165393635386637626537363130383134 Dec 16 12:15:25.826000 audit: BPF prog-id=258 op=LOAD Dec 16 12:15:25.826000 audit[5387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5376 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306533623662383562396165393635386637626537363130383134 Dec 16 12:15:25.826000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:15:25.826000 audit[5387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5376 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306533623662383562396165393635386637626537363130383134 Dec 16 12:15:25.826000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:15:25.826000 audit[5387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5376 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306533623662383562396165393635386637626537363130383134 Dec 16 12:15:25.826000 audit: BPF prog-id=259 op=LOAD Dec 16 12:15:25.826000 audit[5387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5376 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:25.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306533623662383562396165393635386637626537363130383134 Dec 16 12:15:25.853339 containerd[2156]: time="2025-12-16T12:15:25.853301187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7x78m,Uid:c365101f-0c2a-4266-abb7-2136287ff3ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"550e3b6b85b9ae9658f7be76108143f132900089e84b6062155750df967dbf86\"" Dec 16 12:15:25.896614 systemd-networkd[1754]: calie993a03ebda: Gained IPv6LL Dec 16 12:15:26.047672 containerd[2156]: time="2025-12-16T12:15:26.047534790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:15:26.050592 containerd[2156]: time="2025-12-16T12:15:26.050552771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:15:26.050674 containerd[2156]: time="2025-12-16T12:15:26.050642651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:26.050868 kubelet[3702]: E1216 12:15:26.050824 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:26.050921 kubelet[3702]: E1216 12:15:26.050876 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:26.051121 kubelet[3702]: E1216 12:15:26.051081 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jl6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74755c97b7-c55sp_calico-apiserver(e0f8af17-5d0e-41d3-8143-e682bcff58c4): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:26.051617 containerd[2156]: time="2025-12-16T12:15:26.051573998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:15:26.052974 kubelet[3702]: E1216 12:15:26.052934 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:15:26.307812 containerd[2156]: time="2025-12-16T12:15:26.307675889Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:26.311500 containerd[2156]: time="2025-12-16T12:15:26.311447211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:15:26.311590 containerd[2156]: time="2025-12-16T12:15:26.311555851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:26.311891 kubelet[3702]: E1216 12:15:26.311750 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:15:26.311891 kubelet[3702]: E1216 12:15:26.311804 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:15:26.312042 kubelet[3702]: E1216 12:15:26.312013 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgdkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7x78m_calico-system(c365101f-0c2a-4266-abb7-2136287ff3ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:26.313941 kubelet[3702]: E1216 12:15:26.313826 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:15:26.609467 kubelet[3702]: E1216 12:15:26.608652 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:15:26.613393 kubelet[3702]: E1216 12:15:26.613358 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:15:26.642000 audit[5422]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:26.642000 audit[5422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd3b7b490 a2=0 a3=1 items=0 ppid=3860 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:26.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:26.647000 audit[5422]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:26.647000 audit[5422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd3b7b490 a2=0 a3=1 items=0 ppid=3860 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:26.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:26.921035 systemd-networkd[1754]: cali9ec7e9bfcff: Gained IPv6LL Dec 16 12:15:27.482086 containerd[2156]: time="2025-12-16T12:15:27.481925892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcxhs,Uid:c05ee2d8-b800-46f3-8cb2-2945d50dda66,Namespace:kube-system,Attempt:0,}" Dec 16 12:15:27.482086 containerd[2156]: time="2025-12-16T12:15:27.482000579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lrs86,Uid:77a7712d-2394-4a4f-8873-2dd27305d176,Namespace:calico-system,Attempt:0,}" Dec 16 12:15:27.482469 containerd[2156]: time="2025-12-16T12:15:27.481949542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74755c97b7-lhmmb,Uid:94cf7c82-e4b2-4a6c-9fc8-83e906d8394a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:15:27.615147 kubelet[3702]: E1216 12:15:27.615007 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:15:27.615785 kubelet[3702]: E1216 12:15:27.615303 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:15:27.664945 systemd-networkd[1754]: cali17c95745f37: Link UP Dec 16 12:15:27.666165 systemd-networkd[1754]: cali17c95745f37: Gained carrier Dec 16 12:15:27.669000 audit[5487]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:27.669000 audit[5487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9fc0510 a2=0 a3=1 items=0 ppid=3860 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.669000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:27.675000 audit[5487]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:27.675000 audit[5487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc9fc0510 a2=0 a3=1 items=0 ppid=3860 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.566 [INFO][5427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0 csi-node-driver- calico-system 77a7712d-2394-4a4f-8873-2dd27305d176 707 0 2025-12-16 12:15:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-a-4d45b340a5 csi-node-driver-lrs86 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali17c95745f37 [] [] }} ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Namespace="calico-system" Pod="csi-node-driver-lrs86" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.567 [INFO][5427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" 
Namespace="calico-system" Pod="csi-node-driver-lrs86" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.594 [INFO][5466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" HandleID="k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Workload="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.594 [INFO][5466] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" HandleID="k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Workload="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-4d45b340a5", "pod":"csi-node-driver-lrs86", "timestamp":"2025-12-16 12:15:27.594546897 +0000 UTC"}, Hostname:"ci-4547.0.0-a-4d45b340a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.594 [INFO][5466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.594 [INFO][5466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.594 [INFO][5466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-4d45b340a5' Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.601 [INFO][5466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.606 [INFO][5466] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.612 [INFO][5466] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.616 [INFO][5466] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.623 [INFO][5466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.624 [INFO][5466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.627 [INFO][5466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24 Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.633 [INFO][5466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" host="ci-4547.0.0-a-4d45b340a5" 
Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.645 [INFO][5466] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.134/26] block=192.168.65.128/26 handle="k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.646 [INFO][5466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.134/26] handle="k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.646 [INFO][5466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:15:27.682659 containerd[2156]: 2025-12-16 12:15:27.646 [INFO][5466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.134/26] IPv6=[] ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" HandleID="k8s-pod-network.15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Workload="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" Dec 16 12:15:27.683036 containerd[2156]: 2025-12-16 12:15:27.649 [INFO][5427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Namespace="calico-system" Pod="csi-node-driver-lrs86" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77a7712d-2394-4a4f-8873-2dd27305d176", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"", Pod:"csi-node-driver-lrs86", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali17c95745f37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:27.683036 containerd[2156]: 2025-12-16 12:15:27.649 [INFO][5427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.134/32] ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Namespace="calico-system" Pod="csi-node-driver-lrs86" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" Dec 16 12:15:27.683036 containerd[2156]: 2025-12-16 12:15:27.649 [INFO][5427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17c95745f37 
ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Namespace="calico-system" Pod="csi-node-driver-lrs86" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" Dec 16 12:15:27.683036 containerd[2156]: 2025-12-16 12:15:27.667 [INFO][5427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Namespace="calico-system" Pod="csi-node-driver-lrs86" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" Dec 16 12:15:27.683036 containerd[2156]: 2025-12-16 12:15:27.668 [INFO][5427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Namespace="calico-system" Pod="csi-node-driver-lrs86" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77a7712d-2394-4a4f-8873-2dd27305d176", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24", Pod:"csi-node-driver-lrs86", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali17c95745f37", MAC:"7a:5b:cb:fe:8f:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:27.683036 containerd[2156]: 2025-12-16 12:15:27.679 [INFO][5427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" Namespace="calico-system" Pod="csi-node-driver-lrs86" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-csi--node--driver--lrs86-eth0" Dec 16 12:15:27.688793 systemd-networkd[1754]: calic59047bc8ee: Gained IPv6LL Dec 16 12:15:27.691000 audit[5494]: NETFILTER_CFG table=filter:142 family=2 entries=52 op=nft_register_chain pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:27.691000 audit[5494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24328 a0=3 a1=ffffd1603250 a2=0 a3=ffffa2e41fa8 items=0 ppid=4846 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.691000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:27.735567 containerd[2156]: time="2025-12-16T12:15:27.735414152Z" level=info msg="connecting to shim 15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24" address="unix:///run/containerd/s/288ec103883959458830fc9d2b6da77ff3d1ffb63525e71718b6d614b20e0367" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:27.750903 systemd-networkd[1754]: cali46835c0dca3: Link UP Dec 16 12:15:27.752921 systemd-networkd[1754]: cali46835c0dca3: Gained carrier Dec 16 12:15:27.773282 systemd[1]: Started cri-containerd-15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24.scope - libcontainer container 15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24. Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.559 [INFO][5423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0 coredns-674b8bbfcf- kube-system c05ee2d8-b800-46f3-8cb2-2945d50dda66 817 0 2025-12-16 12:14:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-4d45b340a5 coredns-674b8bbfcf-dcxhs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46835c0dca3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcxhs" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.559 [INFO][5423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcxhs" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.606 [INFO][5464] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" HandleID="k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Workload="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.606 [INFO][5464] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" HandleID="k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Workload="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-4d45b340a5", "pod":"coredns-674b8bbfcf-dcxhs", "timestamp":"2025-12-16 12:15:27.606086492 +0000 UTC"}, Hostname:"ci-4547.0.0-a-4d45b340a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.606 [INFO][5464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.646 [INFO][5464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.646 [INFO][5464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-4d45b340a5' Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.702 [INFO][5464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.707 [INFO][5464] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.717 [INFO][5464] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.721 [INFO][5464] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.726 [INFO][5464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.726 [INFO][5464] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.727 [INFO][5464] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.732 [INFO][5464] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.742 [INFO][5464] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.135/26] block=192.168.65.128/26 handle="k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.742 [INFO][5464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.135/26] handle="k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.742 [INFO][5464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:27.776111 containerd[2156]: 2025-12-16 12:15:27.742 [INFO][5464] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.135/26] IPv6=[] ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" HandleID="k8s-pod-network.e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Workload="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" Dec 16 12:15:27.777121 containerd[2156]: 2025-12-16 12:15:27.748 [INFO][5423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcxhs" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c05ee2d8-b800-46f3-8cb2-2945d50dda66", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"", Pod:"coredns-674b8bbfcf-dcxhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46835c0dca3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:27.777121 containerd[2156]: 2025-12-16 12:15:27.748 [INFO][5423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.135/32] ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcxhs" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" Dec 16 12:15:27.777121 containerd[2156]: 2025-12-16 12:15:27.748 [INFO][5423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46835c0dca3 ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcxhs" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" Dec 16 12:15:27.777121 containerd[2156]: 2025-12-16 12:15:27.755 [INFO][5423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dcxhs" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" Dec 16 12:15:27.777121 containerd[2156]: 2025-12-16 12:15:27.756 [INFO][5423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcxhs" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c05ee2d8-b800-46f3-8cb2-2945d50dda66", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea", Pod:"coredns-674b8bbfcf-dcxhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46835c0dca3", MAC:"ba:c8:6c:47:5c:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:27.777121 containerd[2156]: 2025-12-16 12:15:27.772 [INFO][5423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcxhs" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-coredns--674b8bbfcf--dcxhs-eth0" Dec 16 12:15:27.785000 audit: BPF prog-id=260 op=LOAD Dec 16 12:15:27.786000 audit: BPF prog-id=261 op=LOAD Dec 16 12:15:27.786000 audit[5516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5505 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613062656266663466333033666131313964613862313965353563 Dec 16 12:15:27.787000 audit: BPF prog-id=261 op=UNLOAD Dec 16 12:15:27.787000 
audit[5516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5505 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613062656266663466333033666131313964613862313965353563 Dec 16 12:15:27.787000 audit: BPF prog-id=262 op=LOAD Dec 16 12:15:27.787000 audit[5516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5505 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613062656266663466333033666131313964613862313965353563 Dec 16 12:15:27.787000 audit: BPF prog-id=263 op=LOAD Dec 16 12:15:27.787000 audit[5516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5505 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613062656266663466333033666131313964613862313965353563 Dec 16 12:15:27.787000 audit: BPF prog-id=263 op=UNLOAD Dec 16 12:15:27.787000 audit[5516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5505 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613062656266663466333033666131313964613862313965353563 Dec 16 12:15:27.787000 audit: BPF prog-id=262 op=UNLOAD Dec 16 12:15:27.787000 audit[5516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5505 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613062656266663466333033666131313964613862313965353563 Dec 16 12:15:27.787000 audit: BPF prog-id=264 op=LOAD Dec 16 12:15:27.787000 audit[5516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5505 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135613062656266663466333033666131313964613862313965353563 Dec 16 12:15:27.793000 audit[5543]: NETFILTER_CFG table=filter:143 family=2 entries=52 op=nft_register_chain pid=5543 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:27.793000 audit[5543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23908 a0=3 a1=ffffd1b8b6d0 a2=0 a3=ffffb1c99fa8 items=0 ppid=4846 pid=5543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.793000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:27.830131 containerd[2156]: time="2025-12-16T12:15:27.830080320Z" level=info msg="connecting to shim e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea" address="unix:///run/containerd/s/5f2347b6536f11b6c0bcf6f952427c01815b62c378f1cb2ec0d2d65c78391be8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:27.830937 containerd[2156]: time="2025-12-16T12:15:27.830908610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lrs86,Uid:77a7712d-2394-4a4f-8873-2dd27305d176,Namespace:calico-system,Attempt:0,} returns sandbox id \"15a0bebff4f303fa119da8b19e55c0b87d973be6c0917ee2069cad997645cf24\"" Dec 16 12:15:27.833702 containerd[2156]: time="2025-12-16T12:15:27.833648790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:15:27.852074 systemd-networkd[1754]: cali6a5d11fe56a: Link UP Dec 16 12:15:27.853092 systemd-networkd[1754]: cali6a5d11fe56a: Gained carrier Dec 16 12:15:27.870669 systemd[1]: Started cri-containerd-e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea.scope - libcontainer container e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea. 
Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.576 [INFO][5445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0 calico-apiserver-74755c97b7- calico-apiserver 94cf7c82-e4b2-4a6c-9fc8-83e906d8394a 820 0 2025-12-16 12:14:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74755c97b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-4d45b340a5 calico-apiserver-74755c97b7-lhmmb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a5d11fe56a [] [] }} ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-lhmmb" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.576 [INFO][5445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-lhmmb" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.628 [INFO][5476] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" HandleID="k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.629 [INFO][5476] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" HandleID="k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caf20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-4d45b340a5", "pod":"calico-apiserver-74755c97b7-lhmmb", "timestamp":"2025-12-16 12:15:27.628855534 +0000 UTC"}, Hostname:"ci-4547.0.0-a-4d45b340a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.629 [INFO][5476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.744 [INFO][5476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.744 [INFO][5476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-4d45b340a5' Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.803 [INFO][5476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.808 [INFO][5476] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.815 [INFO][5476] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.817 [INFO][5476] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.825 [INFO][5476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.825 [INFO][5476] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.827 [INFO][5476] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2 Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.834 [INFO][5476] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.845 [INFO][5476] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.136/26] block=192.168.65.128/26 handle="k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.845 [INFO][5476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.136/26] handle="k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" host="ci-4547.0.0-a-4d45b340a5" Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.845 [INFO][5476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:15:27.873430 containerd[2156]: 2025-12-16 12:15:27.845 [INFO][5476] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.136/26] IPv6=[] ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" HandleID="k8s-pod-network.5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Workload="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" Dec 16 12:15:27.874017 containerd[2156]: 2025-12-16 12:15:27.848 [INFO][5445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-lhmmb" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0", GenerateName:"calico-apiserver-74755c97b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"94cf7c82-e4b2-4a6c-9fc8-83e906d8394a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74755c97b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"", Pod:"calico-apiserver-74755c97b7-lhmmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a5d11fe56a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:27.874017 containerd[2156]: 2025-12-16 12:15:27.848 [INFO][5445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.136/32] ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-lhmmb" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" Dec 16 12:15:27.874017 containerd[2156]: 2025-12-16 12:15:27.848 [INFO][5445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a5d11fe56a ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-lhmmb" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" Dec 16 12:15:27.874017 containerd[2156]: 2025-12-16 12:15:27.855 [INFO][5445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-lhmmb" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" Dec 16 12:15:27.874017 containerd[2156]: 2025-12-16 12:15:27.856 
[INFO][5445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-lhmmb" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0", GenerateName:"calico-apiserver-74755c97b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"94cf7c82-e4b2-4a6c-9fc8-83e906d8394a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74755c97b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-4d45b340a5", ContainerID:"5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2", Pod:"calico-apiserver-74755c97b7-lhmmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a5d11fe56a", MAC:"0e:95:75:62:4b:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:15:27.874017 containerd[2156]: 2025-12-16 12:15:27.868 [INFO][5445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" Namespace="calico-apiserver" Pod="calico-apiserver-74755c97b7-lhmmb" WorkloadEndpoint="ci--4547.0.0--a--4d45b340a5-k8s-calico--apiserver--74755c97b7--lhmmb-eth0" Dec 16 12:15:27.882000 audit: BPF prog-id=265 op=LOAD Dec 16 12:15:27.883000 audit: BPF prog-id=266 op=LOAD Dec 16 12:15:27.883000 audit[5571]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5559 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533316364383939353731353937643963646565323934326265326637 Dec 16 12:15:27.883000 audit: BPF prog-id=266 op=UNLOAD Dec 16 12:15:27.883000 audit[5571]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5559 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.883000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533316364383939353731353937643963646565323934326265326637 Dec 16 12:15:27.883000 audit: BPF prog-id=267 op=LOAD Dec 16 12:15:27.883000 audit[5571]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5559 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533316364383939353731353937643963646565323934326265326637 Dec 16 12:15:27.883000 audit: BPF prog-id=268 op=LOAD Dec 16 12:15:27.883000 audit[5571]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5559 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533316364383939353731353937643963646565323934326265326637 Dec 16 12:15:27.883000 audit: BPF prog-id=268 op=UNLOAD Dec 16 12:15:27.883000 audit[5571]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5559 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533316364383939353731353937643963646565323934326265326637 Dec 16 12:15:27.883000 audit: BPF prog-id=267 op=UNLOAD Dec 16 12:15:27.883000 audit[5571]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5559 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533316364383939353731353937643963646565323934326265326637 Dec 16 12:15:27.883000 audit: BPF prog-id=269 op=LOAD Dec 16 12:15:27.883000 audit[5571]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5559 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.883000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533316364383939353731353937643963646565323934326265326637 Dec 16 12:15:27.896000 audit[5599]: NETFILTER_CFG table=filter:144 family=2 entries=61 op=nft_register_chain pid=5599 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:15:27.896000 audit[5599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29016 a0=3 a1=ffffe7cb7bd0 a2=0 a3=ffffa995cfa8 items=0 ppid=4846 pid=5599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.896000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:15:27.911629 containerd[2156]: time="2025-12-16T12:15:27.911599238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcxhs,Uid:c05ee2d8-b800-46f3-8cb2-2945d50dda66,Namespace:kube-system,Attempt:0,} returns sandbox id \"e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea\"" Dec 16 12:15:27.915878 containerd[2156]: time="2025-12-16T12:15:27.915831775Z" level=info msg="connecting to shim 5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2" address="unix:///run/containerd/s/965ec0b61fc2dec790c9412c29dbc451c1b19457fa8260cbaaad28a2b9edfd37" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:15:27.921927 containerd[2156]: time="2025-12-16T12:15:27.921888818Z" level=info msg="CreateContainer within sandbox \"e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:15:27.936651 systemd[1]: Started cri-containerd-5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2.scope - libcontainer container 5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2. 
Dec 16 12:15:27.947000 audit: BPF prog-id=270 op=LOAD Dec 16 12:15:27.947000 audit: BPF prog-id=271 op=LOAD Dec 16 12:15:27.947000 audit[5627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5616 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537373134303766346331616539313030653637616430333333363331 Dec 16 12:15:27.947000 audit: BPF prog-id=271 op=UNLOAD Dec 16 12:15:27.947000 audit[5627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5616 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537373134303766346331616539313030653637616430333333363331 Dec 16 12:15:27.948000 audit: BPF prog-id=272 op=LOAD Dec 16 12:15:27.948000 audit[5627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5616 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537373134303766346331616539313030653637616430333333363331 Dec 16 12:15:27.948000 audit: BPF prog-id=273 op=LOAD Dec 16 12:15:27.948000 audit[5627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5616 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537373134303766346331616539313030653637616430333333363331 Dec 16 12:15:27.948000 audit: BPF prog-id=273 op=UNLOAD Dec 16 12:15:27.948000 audit[5627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5616 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537373134303766346331616539313030653637616430333333363331 Dec 16 12:15:27.948000 audit: BPF prog-id=272 op=UNLOAD Dec 16 12:15:27.948000 audit[5627]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5616 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537373134303766346331616539313030653637616430333333363331 Dec 16 12:15:27.948000 audit: BPF prog-id=274 op=LOAD Dec 16 12:15:27.948000 audit[5627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5616 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:27.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537373134303766346331616539313030653637616430333333363331 Dec 16 12:15:27.952863 containerd[2156]: time="2025-12-16T12:15:27.952826771Z" level=info msg="Container 96d092f45b1bdf13639f4cffc9c46d8e4a270791f6ff27cc3a31c4ca35a69481: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:15:27.970611 containerd[2156]: time="2025-12-16T12:15:27.970506944Z" level=info msg="CreateContainer within sandbox \"e31cd899571597d9cdee2942be2f744bb9acf0dcd96adc6bf6d0b12654a725ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96d092f45b1bdf13639f4cffc9c46d8e4a270791f6ff27cc3a31c4ca35a69481\"" Dec 16 12:15:27.972547 containerd[2156]: time="2025-12-16T12:15:27.971645837Z" level=info msg="StartContainer for \"96d092f45b1bdf13639f4cffc9c46d8e4a270791f6ff27cc3a31c4ca35a69481\"" Dec 16 12:15:27.972756 containerd[2156]: time="2025-12-16T12:15:27.972711236Z" level=info msg="connecting to shim 96d092f45b1bdf13639f4cffc9c46d8e4a270791f6ff27cc3a31c4ca35a69481" address="unix:///run/containerd/s/5f2347b6536f11b6c0bcf6f952427c01815b62c378f1cb2ec0d2d65c78391be8" protocol=ttrpc version=3 Dec 16 12:15:28.009797 systemd[1]: Started cri-containerd-96d092f45b1bdf13639f4cffc9c46d8e4a270791f6ff27cc3a31c4ca35a69481.scope - libcontainer container 96d092f45b1bdf13639f4cffc9c46d8e4a270791f6ff27cc3a31c4ca35a69481. 
Dec 16 12:15:28.023000 audit: BPF prog-id=275 op=LOAD Dec 16 12:15:28.024000 audit: BPF prog-id=276 op=LOAD Dec 16 12:15:28.024000 audit[5646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5559 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643039326634356231626466313336333966346366666339633436 Dec 16 12:15:28.024000 audit: BPF prog-id=276 op=UNLOAD Dec 16 12:15:28.024000 audit[5646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5559 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643039326634356231626466313336333966346366666339633436 Dec 16 12:15:28.024000 audit: BPF prog-id=277 op=LOAD Dec 16 12:15:28.024000 audit[5646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5559 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643039326634356231626466313336333966346366666339633436 Dec 16 12:15:28.024000 audit: BPF prog-id=278 op=LOAD Dec 16 12:15:28.024000 audit[5646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5559 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643039326634356231626466313336333966346366666339633436 Dec 16 12:15:28.024000 audit: BPF prog-id=278 op=UNLOAD Dec 16 12:15:28.024000 audit[5646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5559 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643039326634356231626466313336333966346366666339633436 Dec 16 12:15:28.024000 audit: BPF prog-id=277 op=UNLOAD Dec 16 12:15:28.024000 audit[5646]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5559 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643039326634356231626466313336333966346366666339633436 Dec 16 12:15:28.025000 audit: BPF prog-id=279 op=LOAD Dec 16 12:15:28.025000 audit[5646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5559 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643039326634356231626466313336333966346366666339633436 Dec 16 12:15:28.054167 containerd[2156]: time="2025-12-16T12:15:28.054058683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74755c97b7-lhmmb,Uid:94cf7c82-e4b2-4a6c-9fc8-83e906d8394a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5771407f4c1ae9100e67ad03336317ae42e1d31cbde692853b009dfae75734e2\"" Dec 16 12:15:28.055247 containerd[2156]: time="2025-12-16T12:15:28.055219770Z" level=info msg="StartContainer for \"96d092f45b1bdf13639f4cffc9c46d8e4a270791f6ff27cc3a31c4ca35a69481\" returns successfully" Dec 16 12:15:28.104469 containerd[2156]: time="2025-12-16T12:15:28.104421348Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:28.109114 containerd[2156]: time="2025-12-16T12:15:28.108977210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:15:28.109232 containerd[2156]: time="2025-12-16T12:15:28.109088275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:28.109666 kubelet[3702]: E1216 12:15:28.109615 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:15:28.109790 kubelet[3702]: E1216 12:15:28.109678 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:15:28.110523 kubelet[3702]: E1216 12:15:28.110203 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:28.110799 containerd[2156]: time="2025-12-16T12:15:28.110771881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:15:28.362934 containerd[2156]: time="2025-12-16T12:15:28.362885939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:28.366534 containerd[2156]: time="2025-12-16T12:15:28.366471979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:15:28.366669 containerd[2156]: time="2025-12-16T12:15:28.366485572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:28.366763 kubelet[3702]: E1216 12:15:28.366710 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:28.366821 kubelet[3702]: E1216 12:15:28.366773 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:28.367071 containerd[2156]: time="2025-12-16T12:15:28.367047438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:15:28.367360 kubelet[3702]: E1216 12:15:28.367306 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ltkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74755c97b7-lhmmb_calico-apiserver(94cf7c82-e4b2-4a6c-9fc8-83e906d8394a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:28.368773 kubelet[3702]: E1216 12:15:28.368696 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:15:28.622427 containerd[2156]: time="2025-12-16T12:15:28.622200262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:28.625826 containerd[2156]: time="2025-12-16T12:15:28.625709167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:15:28.626154 containerd[2156]: time="2025-12-16T12:15:28.625925242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:28.628656 kubelet[3702]: E1216 12:15:28.627674 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:15:28.629640 kubelet[3702]: E1216 12:15:28.628986 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:15:28.629640 kubelet[3702]: E1216 12:15:28.629025 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:15:28.629640 kubelet[3702]: E1216 12:15:28.629099 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:28.631107 kubelet[3702]: E1216 12:15:28.631041 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:28.658313 kubelet[3702]: I1216 12:15:28.657628 3702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dcxhs" podStartSLOduration=44.657613013 podStartE2EDuration="44.657613013s" podCreationTimestamp="2025-12-16 12:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:28.656528269 +0000 UTC m=+50.263145572" watchObservedRunningTime="2025-12-16 12:15:28.657613013 +0000 UTC m=+50.264230260" Dec 16 12:15:28.679000 audit[5685]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5685 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Dec 16 12:15:28.679000 audit[5685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc4882f70 a2=0 a3=1 items=0 ppid=3860 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.679000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:28.690000 audit[5685]: NETFILTER_CFG table=nat:146 family=2 entries=44 op=nft_register_rule pid=5685 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:28.690000 audit[5685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc4882f70 a2=0 a3=1 items=0 ppid=3860 pid=5685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:28.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:29.160723 systemd-networkd[1754]: cali46835c0dca3: Gained IPv6LL Dec 16 12:15:29.481632 systemd-networkd[1754]: cali6a5d11fe56a: Gained IPv6LL Dec 16 12:15:29.628284 kubelet[3702]: E1216 12:15:29.628144 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:15:29.630423 kubelet[3702]: E1216 12:15:29.630332 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:29.672706 systemd-networkd[1754]: cali17c95745f37: Gained IPv6LL Dec 16 12:15:29.707000 audit[5687]: NETFILTER_CFG table=filter:147 family=2 entries=14 op=nft_register_rule pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:29.723508 kernel: kauditd_printk_skb: 214 callbacks suppressed Dec 16 12:15:29.723639 kernel: audit: type=1325 audit(1765887329.707:768): table=filter:147 family=2 entries=14 op=nft_register_rule pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:29.707000 audit[5687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcf08c040 a2=0 a3=1 items=0 ppid=3860 
pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:29.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:29.754759 kernel: audit: type=1300 audit(1765887329.707:768): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcf08c040 a2=0 a3=1 items=0 ppid=3860 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:29.754924 kernel: audit: type=1327 audit(1765887329.707:768): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:29.755000 audit[5687]: NETFILTER_CFG table=nat:148 family=2 entries=56 op=nft_register_chain pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:29.755000 audit[5687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcf08c040 a2=0 a3=1 items=0 ppid=3860 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:29.782806 kernel: audit: type=1325 audit(1765887329.755:769): table=nat:148 family=2 entries=56 op=nft_register_chain pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:15:29.782900 kernel: audit: type=1300 audit(1765887329.755:769): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcf08c040 a2=0 a3=1 items=0 ppid=3860 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:29.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:29.792743 kernel: audit: type=1327 audit(1765887329.755:769): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:15:35.483894 containerd[2156]: time="2025-12-16T12:15:35.483087030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:15:35.796347 containerd[2156]: time="2025-12-16T12:15:35.796078174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:35.799748 containerd[2156]: time="2025-12-16T12:15:35.799706451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:15:35.800520 containerd[2156]: time="2025-12-16T12:15:35.799854608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:35.800594 kubelet[3702]: E1216 12:15:35.800057 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:15:35.800594 kubelet[3702]: E1216 12:15:35.800117 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:15:35.800594 kubelet[3702]: E1216 12:15:35.800212 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b28635fdc3bf4ca29f7ba01fb74863fe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwb7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98b9677f6-82g96_calico-system(738872a1-0466-4442-a71b-b4f7bae6b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:35.802754 containerd[2156]: time="2025-12-16T12:15:35.802726578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:15:36.024905 containerd[2156]: time="2025-12-16T12:15:36.024857179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:36.028296 containerd[2156]: time="2025-12-16T12:15:36.028251044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:15:36.028380 containerd[2156]: time="2025-12-16T12:15:36.028248660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:36.029399 kubelet[3702]: E1216 12:15:36.028556 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" 
Dec 16 12:15:36.029399 kubelet[3702]: E1216 12:15:36.028613 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:15:36.029399 kubelet[3702]: E1216 12:15:36.028712 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwb7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98b9677f6-82g96_calico-system(738872a1-0466-4442-a71b-b4f7bae6b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:36.029939 kubelet[3702]: E1216 12:15:36.029903 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:15:38.482816 containerd[2156]: time="2025-12-16T12:15:38.482744024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" 
Dec 16 12:15:38.735198 containerd[2156]: time="2025-12-16T12:15:38.734949683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:38.738379 containerd[2156]: time="2025-12-16T12:15:38.738343624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:15:38.738450 containerd[2156]: time="2025-12-16T12:15:38.738422909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:38.739155 kubelet[3702]: E1216 12:15:38.738598 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:15:38.739155 kubelet[3702]: E1216 12:15:38.738660 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:15:38.739155 kubelet[3702]: E1216 12:15:38.738773 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fblkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547b944668-qhgzg_calico-system(7de63821-b623-478a-a40e-6502071e35ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:38.740370 kubelet[3702]: E1216 12:15:38.740265 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:15:41.482240 containerd[2156]: time="2025-12-16T12:15:41.481971630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:15:41.756978 containerd[2156]: time="2025-12-16T12:15:41.756837741Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:41.760413 containerd[2156]: time="2025-12-16T12:15:41.760314075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:15:41.760413 containerd[2156]: time="2025-12-16T12:15:41.760369000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:41.760636 kubelet[3702]: E1216 12:15:41.760599 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:15:41.760862 kubelet[3702]: E1216 12:15:41.760643 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:15:41.760885 kubelet[3702]: E1216 12:15:41.760769 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgdkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7x78m_calico-system(c365101f-0c2a-4266-abb7-2136287ff3ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:41.763188 kubelet[3702]: E1216 12:15:41.762935 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:15:42.485110 containerd[2156]: time="2025-12-16T12:15:42.484964142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 
12:15:42.765463 containerd[2156]: time="2025-12-16T12:15:42.765324735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:42.768693 containerd[2156]: time="2025-12-16T12:15:42.768647080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:15:42.768850 containerd[2156]: time="2025-12-16T12:15:42.768653369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:42.768911 kubelet[3702]: E1216 12:15:42.768867 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:15:42.769441 kubelet[3702]: E1216 12:15:42.768915 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:15:42.769441 kubelet[3702]: E1216 12:15:42.769168 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:42.769922 containerd[2156]: time="2025-12-16T12:15:42.769628051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:15:43.033573 containerd[2156]: time="2025-12-16T12:15:43.033222585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:43.037102 containerd[2156]: time="2025-12-16T12:15:43.037062854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:15:43.037298 containerd[2156]: time="2025-12-16T12:15:43.037150062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:43.037359 kubelet[3702]: E1216 12:15:43.037318 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:43.037398 kubelet[3702]: E1216 12:15:43.037369 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:43.037895 containerd[2156]: time="2025-12-16T12:15:43.037655224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:15:43.037969 kubelet[3702]: E1216 12:15:43.037789 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jl6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74755c97b7-c55sp_calico-apiserver(e0f8af17-5d0e-41d3-8143-e682bcff58c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:43.039581 kubelet[3702]: E1216 12:15:43.039555 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:15:43.354955 containerd[2156]: time="2025-12-16T12:15:43.354907545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:43.359831 containerd[2156]: time="2025-12-16T12:15:43.359781862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:15:43.359970 containerd[2156]: time="2025-12-16T12:15:43.359873245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:43.360051 kubelet[3702]: E1216 12:15:43.360015 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:43.360108 kubelet[3702]: E1216 12:15:43.360059 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:15:43.360408 kubelet[3702]: E1216 12:15:43.360327 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ltkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74755c97b7-lhmmb_calico-apiserver(94cf7c82-e4b2-4a6c-9fc8-83e906d8394a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:43.360727 containerd[2156]: time="2025-12-16T12:15:43.360624005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:15:43.361634 kubelet[3702]: E1216 12:15:43.361603 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:15:43.604166 containerd[2156]: time="2025-12-16T12:15:43.604118607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:43.607665 containerd[2156]: time="2025-12-16T12:15:43.607471235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:15:43.607665 containerd[2156]: time="2025-12-16T12:15:43.607586316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active 
requests=0, bytes read=0" Dec 16 12:15:43.607946 kubelet[3702]: E1216 12:15:43.607730 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:15:43.607946 kubelet[3702]: E1216 12:15:43.607777 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:15:43.607946 kubelet[3702]: E1216 12:15:43.608109 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:43.609290 kubelet[3702]: E1216 12:15:43.609244 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:48.483645 kubelet[3702]: E1216 12:15:48.483537 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:15:52.484014 kubelet[3702]: E1216 12:15:52.483952 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:15:53.481631 kubelet[3702]: E1216 12:15:53.481575 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:15:56.484587 kubelet[3702]: E1216 12:15:56.483428 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:15:57.482490 kubelet[3702]: E1216 12:15:57.482441 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:15:58.485156 kubelet[3702]: E1216 12:15:58.485089 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:15:59.484686 containerd[2156]: time="2025-12-16T12:15:59.484582107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:15:59.763925 containerd[2156]: time="2025-12-16T12:15:59.763580572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:15:59.769166 containerd[2156]: time="2025-12-16T12:15:59.769019947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:15:59.769166 containerd[2156]: time="2025-12-16T12:15:59.769114818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:15:59.770273 kubelet[3702]: E1216 12:15:59.769417 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:15:59.770273 kubelet[3702]: E1216 12:15:59.769462 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:15:59.770273 kubelet[3702]: E1216 12:15:59.769613 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b28635fdc3bf4ca29f7ba01fb74863fe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwb7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98b9677f6-82g96_calico-system(738872a1-0466-4442-a71b-b4f7bae6b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:15:59.772672 containerd[2156]: time="2025-12-16T12:15:59.772575820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:16:00.035284 containerd[2156]: time="2025-12-16T12:16:00.035145546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:00.039320 containerd[2156]: time="2025-12-16T12:16:00.039277809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:16:00.039417 containerd[2156]: time="2025-12-16T12:16:00.039366952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:00.039556 kubelet[3702]: E1216 12:16:00.039518 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:00.039600 kubelet[3702]: E1216 12:16:00.039567 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:00.039698 kubelet[3702]: E1216 12:16:00.039666 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwb7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98b9677f6-82g96_calico-system(738872a1-0466-4442-a71b-b4f7bae6b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:00.041111 kubelet[3702]: E1216 12:16:00.041067 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:16:07.483023 containerd[2156]: time="2025-12-16T12:16:07.482978990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:16:07.958612 containerd[2156]: time="2025-12-16T12:16:07.958558397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:07.964432 containerd[2156]: time="2025-12-16T12:16:07.964393470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:16:07.964529 containerd[2156]: time="2025-12-16T12:16:07.964489782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:07.964703 kubelet[3702]: E1216 12:16:07.964653 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:07.964977 kubelet[3702]: E1216 12:16:07.964715 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:07.964977 kubelet[3702]: E1216 12:16:07.964890 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fblkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547b944668-qhgzg_calico-system(7de63821-b623-478a-a40e-6502071e35ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:07.965459 containerd[2156]: time="2025-12-16T12:16:07.965408612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:16:07.966002 kubelet[3702]: E1216 12:16:07.965972 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:16:08.200880 containerd[2156]: time="2025-12-16T12:16:08.200820177Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:08.204549 containerd[2156]: time="2025-12-16T12:16:08.204505757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:16:08.204625 containerd[2156]: time="2025-12-16T12:16:08.204590187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:08.204825 kubelet[3702]: E1216 12:16:08.204782 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:16:08.204895 kubelet[3702]: E1216 12:16:08.204834 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:16:08.205291 kubelet[3702]: E1216 12:16:08.204950 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgdkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7x78m_calico-system(c365101f-0c2a-4266-abb7-2136287ff3ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:08.206103 kubelet[3702]: E1216 12:16:08.206080 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:16:08.484551 containerd[2156]: time="2025-12-16T12:16:08.484434499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 
16 12:16:08.780947 containerd[2156]: time="2025-12-16T12:16:08.780592835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:08.784098 containerd[2156]: time="2025-12-16T12:16:08.784010010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:08.784098 containerd[2156]: time="2025-12-16T12:16:08.784065910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:08.784377 kubelet[3702]: E1216 12:16:08.784327 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:08.784462 kubelet[3702]: E1216 12:16:08.784382 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:08.785440 kubelet[3702]: E1216 12:16:08.784516 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jl6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-74755c97b7-c55sp_calico-apiserver(e0f8af17-5d0e-41d3-8143-e682bcff58c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:08.785631 kubelet[3702]: E1216 12:16:08.785611 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:16:09.486091 containerd[2156]: time="2025-12-16T12:16:09.484257766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:16:09.736753 containerd[2156]: time="2025-12-16T12:16:09.736618620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:09.762813 containerd[2156]: time="2025-12-16T12:16:09.762742545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:16:09.762966 containerd[2156]: time="2025-12-16T12:16:09.762860809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:09.763184 kubelet[3702]: E1216 12:16:09.763134 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:16:09.763461 kubelet[3702]: E1216 12:16:09.763201 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:16:09.763461 kubelet[3702]: E1216 12:16:09.763435 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:09.765814 containerd[2156]: time="2025-12-16T12:16:09.765783341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:16:10.035117 containerd[2156]: time="2025-12-16T12:16:10.034561310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:10.038753 containerd[2156]: time="2025-12-16T12:16:10.038651886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:16:10.038753 containerd[2156]: time="2025-12-16T12:16:10.038703570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:10.038910 kubelet[3702]: E1216 12:16:10.038866 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:16:10.038953 kubelet[3702]: E1216 12:16:10.038916 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:16:10.039128 kubelet[3702]: E1216 12:16:10.039039 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:10.040175 kubelet[3702]: E1216 12:16:10.040138 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:16:11.482456 containerd[2156]: time="2025-12-16T12:16:11.482182800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:16:11.710196 containerd[2156]: time="2025-12-16T12:16:11.710143505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:16:11.717009 containerd[2156]: time="2025-12-16T12:16:11.716964153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:11.717113 containerd[2156]: time="2025-12-16T12:16:11.717047503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:11.717234 kubelet[3702]: E1216 12:16:11.717184 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:11.717658 kubelet[3702]: E1216 12:16:11.717233 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:11.717658 kubelet[3702]: E1216 12:16:11.717355 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ltkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74755c97b7-lhmmb_calico-apiserver(94cf7c82-e4b2-4a6c-9fc8-83e906d8394a): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:11.718878 kubelet[3702]: E1216 12:16:11.718841 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:16:14.483710 kubelet[3702]: E1216 12:16:14.483607 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:16:19.482130 kubelet[3702]: E1216 12:16:19.482051 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:16:20.484166 kubelet[3702]: E1216 12:16:20.484083 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:16:21.482098 kubelet[3702]: E1216 12:16:21.482006 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:16:21.482838 kubelet[3702]: E1216 12:16:21.482793 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:16:23.968690 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:46950.service - OpenSSH per-connection server daemon (10.200.16.10:46950). Dec 16 12:16:23.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:46950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:23.985549 kernel: audit: type=1130 audit(1765887383.967:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:46950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:24.405000 audit[5784]: USER_ACCT pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.422903 sshd[5784]: Accepted publickey for core from 10.200.16.10 port 46950 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:24.421000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.424404 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:24.438729 kernel: audit: type=1101 audit(1765887384.405:771): pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.438847 kernel: audit: type=1103 audit(1765887384.421:772): pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.448294 kernel: audit: type=1006 audit(1765887384.422:773): pid=5784 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:16:24.422000 audit[5784]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeb44c8f0 a2=3 a3=0 items=0 ppid=1 pid=5784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:16:24.452345 systemd-logind[2131]: New session 11 of user core. Dec 16 12:16:24.466369 kernel: audit: type=1300 audit(1765887384.422:773): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeb44c8f0 a2=3 a3=0 items=0 ppid=1 pid=5784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:24.422000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:24.474247 kernel: audit: type=1327 audit(1765887384.422:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:24.476732 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:16:24.481000 audit[5784]: USER_START pid=5784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.500000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.517346 kernel: audit: type=1105 audit(1765887384.481:774): pid=5784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.517455 kernel: audit: type=1103 audit(1765887384.500:775): pid=5788 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.715046 sshd[5788]: Connection closed by 10.200.16.10 port 46950 Dec 16 12:16:24.715611 sshd-session[5784]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:24.715000 audit[5784]: USER_END pid=5784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.738627 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:46950.service: Deactivated successfully. Dec 16 12:16:24.715000 audit[5784]: CRED_DISP pid=5784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.747832 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 16 12:16:24.755765 kernel: audit: type=1106 audit(1765887384.715:776): pid=5784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.755860 kernel: audit: type=1104 audit(1765887384.715:777): pid=5784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:24.758527 systemd-logind[2131]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:16:24.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.37:22-10.200.16.10:46950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:24.764747 systemd-logind[2131]: Removed session 11. Dec 16 12:16:25.482302 kubelet[3702]: E1216 12:16:25.482268 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:16:27.484638 kubelet[3702]: E1216 12:16:27.484495 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:16:29.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:46966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:29.801228 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:46966.service - OpenSSH per-connection server daemon (10.200.16.10:46966). Dec 16 12:16:29.804311 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:16:29.804403 kernel: audit: type=1130 audit(1765887389.800:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:46966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:30.211000 audit[5803]: USER_ACCT pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.214657 sshd[5803]: Accepted publickey for core from 10.200.16.10 port 46966 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:30.227000 audit[5803]: CRED_ACQ pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.229479 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:30.242508 kernel: audit: type=1101 audit(1765887390.211:780): pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.242590 kernel: audit: type=1103 audit(1765887390.227:781): pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.252734 kernel: audit: type=1006 audit(1765887390.227:782): pid=5803 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 12:16:30.227000 audit[5803]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd481bd50 a2=3 a3=0 items=0 ppid=1 pid=5803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:30.269552 kernel: audit: type=1300 audit(1765887390.227:782): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd481bd50 a2=3 a3=0 items=0 ppid=1 pid=5803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:30.227000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:30.273823 systemd-logind[2131]: New session 12 of user core. Dec 16 12:16:30.276768 kernel: audit: type=1327 audit(1765887390.227:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:30.280014 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:16:30.282000 audit[5803]: USER_START pid=5803 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.303905 kernel: audit: type=1105 audit(1765887390.282:783): pid=5803 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.304019 kernel: audit: type=1103 audit(1765887390.302:784): pid=5807 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.302000 audit[5807]: CRED_ACQ pid=5807 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.492597 sshd[5807]: Connection closed by 10.200.16.10 port 46966 Dec 16 12:16:30.492851 sshd-session[5803]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:30.494000 audit[5803]: USER_END pid=5803 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.514720 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:46966.service: Deactivated successfully. Dec 16 12:16:30.494000 audit[5803]: CRED_DISP pid=5803 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.515543 kernel: audit: type=1106 audit(1765887390.494:785): pid=5803 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.521689 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:16:30.532222 systemd-logind[2131]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:16:30.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.37:22-10.200.16.10:46966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:30.532574 kernel: audit: type=1104 audit(1765887390.494:786): pid=5803 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:30.533982 systemd-logind[2131]: Removed session 12. 
Dec 16 12:16:31.483636 kubelet[3702]: E1216 12:16:31.483355 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:16:34.484489 kubelet[3702]: E1216 12:16:34.484378 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:16:35.482544 kubelet[3702]: E1216 12:16:35.482497 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:16:35.482829 kubelet[3702]: E1216 12:16:35.482804 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:16:35.572691 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:51048.service - OpenSSH per-connection server daemon (10.200.16.10:51048). Dec 16 12:16:35.586220 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:16:35.586249 kernel: audit: type=1130 audit(1765887395.571:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:51048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:51048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:35.979000 audit[5821]: USER_ACCT pid=5821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:35.981169 sshd[5821]: Accepted publickey for core from 10.200.16.10 port 51048 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:36.000683 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:36.011810 systemd-logind[2131]: New session 13 of user core. Dec 16 12:16:35.998000 audit[5821]: CRED_ACQ pid=5821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.030115 kernel: audit: type=1101 audit(1765887395.979:789): pid=5821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.030199 kernel: audit: type=1103 audit(1765887395.998:790): pid=5821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.031629 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:16:36.046749 kernel: audit: type=1006 audit(1765887395.998:791): pid=5821 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 12:16:35.998000 audit[5821]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd74151b0 a2=3 a3=0 items=0 ppid=1 pid=5821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:36.065049 kernel: audit: type=1300 audit(1765887395.998:791): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd74151b0 a2=3 a3=0 items=0 ppid=1 pid=5821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:35.998000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:36.072533 kernel: audit: type=1327 audit(1765887395.998:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:36.034000 audit[5821]: USER_START pid=5821 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.093457 kernel: audit: type=1105 audit(1765887396.034:792): pid=5821 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.035000 audit[5825]: CRED_ACQ pid=5825 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.109263 kernel: audit: type=1103 audit(1765887396.035:793): pid=5825 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.273460 sshd[5825]: Connection closed by 10.200.16.10 port 51048 Dec 16 12:16:36.274387 sshd-session[5821]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:36.274000 audit[5821]: USER_END pid=5821 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.278545 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:51048.service: Deactivated successfully. Dec 16 12:16:36.280543 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:16:36.274000 audit[5821]: CRED_DISP pid=5821 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.311064 kernel: audit: type=1106 audit(1765887396.274:794): pid=5821 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.311299 kernel: audit: type=1104 audit(1765887396.274:795): pid=5821 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.312311 systemd-logind[2131]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:16:36.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.37:22-10.200.16.10:51048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:36.314069 systemd-logind[2131]: Removed session 13. Dec 16 12:16:36.361171 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:51060.service - OpenSSH per-connection server daemon (10.200.16.10:51060). Dec 16 12:16:36.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.37:22-10.200.16.10:51060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:36.792000 audit[5838]: USER_ACCT pid=5838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.794510 sshd[5838]: Accepted publickey for core from 10.200.16.10 port 51060 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:36.793000 audit[5838]: CRED_ACQ pid=5838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.793000 audit[5838]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde953e00 a2=3 a3=0 items=0 ppid=1 pid=5838 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:36.793000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:36.796035 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:36.799890 systemd-logind[2131]: New session 14 of user core. Dec 16 12:16:36.807637 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:16:36.808000 audit[5838]: USER_START pid=5838 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:36.810000 audit[5842]: CRED_ACQ pid=5842 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:37.096019 sshd[5842]: Connection closed by 10.200.16.10 port 51060 Dec 16 12:16:37.095922 sshd-session[5838]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:37.095000 audit[5838]: USER_END pid=5838 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:37.095000 audit[5838]: CRED_DISP pid=5838 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:37.099176 systemd-logind[2131]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:16:37.100295 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:51060.service: Deactivated successfully. Dec 16 12:16:37.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.37:22-10.200.16.10:51060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:37.102872 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 16 12:16:37.105012 systemd-logind[2131]: Removed session 14. Dec 16 12:16:37.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.37:22-10.200.16.10:51062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:37.181632 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:51062.service - OpenSSH per-connection server daemon (10.200.16.10:51062). Dec 16 12:16:37.483250 kubelet[3702]: E1216 12:16:37.482996 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:16:37.605707 sshd[5852]: Accepted publickey for core from 10.200.16.10 port 51062 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:37.603000 audit[5852]: USER_ACCT pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:37.604000 audit[5852]: CRED_ACQ pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:37.604000 audit[5852]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc8f13f0 a2=3 a3=0 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:37.604000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:37.607019 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:37.612678 systemd-logind[2131]: New session 15 of user core. Dec 16 12:16:37.618647 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 12:16:37.621000 audit[5852]: USER_START pid=5852 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:37.622000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:37.978100 sshd[5856]: Connection closed by 10.200.16.10 port 51062 Dec 16 12:16:37.977308 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:38.036000 audit[5852]: USER_END pid=5852 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:38.036000 audit[5852]: CRED_DISP pid=5852 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:38.039988 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:51062.service: Deactivated successfully. Dec 16 12:16:38.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.37:22-10.200.16.10:51062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.042766 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:16:38.047236 systemd-logind[2131]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:16:38.050297 systemd-logind[2131]: Removed session 15. Dec 16 12:16:38.484192 kubelet[3702]: E1216 12:16:38.484145 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:16:43.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:60644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:43.060729 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:60644.service - OpenSSH per-connection server daemon (10.200.16.10:60644). 
Dec 16 12:16:43.063897 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:16:43.063981 kernel: audit: type=1130 audit(1765887403.060:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:60644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:43.471000 audit[5887]: USER_ACCT pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.488779 sshd[5887]: Accepted publickey for core from 10.200.16.10 port 60644 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:43.487000 audit[5887]: CRED_ACQ pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.489173 sshd-session[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:43.504027 kernel: audit: type=1101 audit(1765887403.471:816): pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.504103 kernel: audit: type=1103 audit(1765887403.487:817): pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.514190 kernel: audit: type=1006 audit(1765887403.487:818): pid=5887 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:16:43.487000 audit[5887]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5f79b50 a2=3 a3=0 items=0 ppid=1 pid=5887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:43.532332 kernel: audit: type=1300 audit(1765887403.487:818): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5f79b50 a2=3 a3=0 items=0 ppid=1 pid=5887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:43.487000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:43.541011 kernel: audit: type=1327 audit(1765887403.487:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:43.543249 systemd-logind[2131]: New session 16 of user core. Dec 16 12:16:43.550665 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:16:43.553000 audit[5887]: USER_START pid=5887 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.573000 audit[5891]: CRED_ACQ pid=5891 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.588352 kernel: audit: type=1105 audit(1765887403.553:819): pid=5887 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.588543 kernel: audit: type=1103 audit(1765887403.573:820): pid=5891 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.774919 sshd[5891]: Connection closed by 10.200.16.10 port 60644 Dec 16 12:16:43.776779 sshd-session[5887]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:43.777000 audit[5887]: USER_END pid=5887 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.800197 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:60644.service: Deactivated successfully. Dec 16 12:16:43.777000 audit[5887]: CRED_DISP pid=5887 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.804095 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:16:43.806627 systemd-logind[2131]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:16:43.809267 systemd-logind[2131]: Removed session 16. Dec 16 12:16:43.817014 kernel: audit: type=1106 audit(1765887403.777:821): pid=5887 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.817096 kernel: audit: type=1104 audit(1765887403.777:822): pid=5887 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:43.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.37:22-10.200.16.10:60644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:46.483090 kubelet[3702]: E1216 12:16:46.483038 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:16:47.486578 kubelet[3702]: E1216 12:16:47.486530 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:16:48.485234 kubelet[3702]: E1216 12:16:48.485111 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:16:48.486370 kubelet[3702]: E1216 12:16:48.486337 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:16:48.868711 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:60660.service - OpenSSH per-connection server daemon (10.200.16.10:60660). Dec 16 12:16:48.888504 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:16:48.888583 kernel: audit: type=1130 audit(1765887408.868:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:60660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:48.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:60660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:49.313000 audit[5907]: USER_ACCT pid=5907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.316545 sshd[5907]: Accepted publickey for core from 10.200.16.10 port 60660 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:49.329000 audit[5907]: CRED_ACQ pid=5907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.331963 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:49.345581 kernel: audit: type=1101 audit(1765887409.313:825): pid=5907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.345680 kernel: audit: type=1103 audit(1765887409.329:826): pid=5907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.358446 kernel: audit: type=1006 audit(1765887409.329:827): pid=5907 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:16:49.329000 audit[5907]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3360760 a2=3 a3=0 items=0 ppid=1 pid=5907 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.365225 systemd-logind[2131]: New session 17 of user core. Dec 16 12:16:49.377125 kernel: audit: type=1300 audit(1765887409.329:827): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3360760 a2=3 a3=0 items=0 ppid=1 pid=5907 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.329000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:49.384073 kernel: audit: type=1327 audit(1765887409.329:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:49.384735 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:16:49.389000 audit[5907]: USER_START pid=5907 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.409000 audit[5911]: CRED_ACQ pid=5911 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.423506 kernel: audit: type=1105 audit(1765887409.389:828): pid=5907 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.423628 kernel: audit: type=1103 audit(1765887409.409:829): pid=5911 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.634701 sshd[5911]: Connection closed by 10.200.16.10 port 60660 Dec 16 12:16:49.635109 sshd-session[5907]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:49.638000 audit[5907]: USER_END pid=5907 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.644623 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:60660.service: Deactivated successfully. Dec 16 12:16:49.647148 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:16:49.662700 systemd-logind[2131]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:16:49.638000 audit[5907]: CRED_DISP pid=5907 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.664405 systemd-logind[2131]: Removed session 17. Dec 16 12:16:49.678177 kernel: audit: type=1106 audit(1765887409.638:830): pid=5907 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.678284 kernel: audit: type=1104 audit(1765887409.638:831): pid=5907 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:49.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.37:22-10.200.16.10:60660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:51.487062 containerd[2156]: time="2025-12-16T12:16:51.486970876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:16:51.795813 containerd[2156]: time="2025-12-16T12:16:51.795570110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:51.799121 containerd[2156]: time="2025-12-16T12:16:51.799070284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:16:51.799208 containerd[2156]: time="2025-12-16T12:16:51.799159799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:51.799449 kubelet[3702]: E1216 12:16:51.799368 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:51.800293 kubelet[3702]: E1216 12:16:51.799815 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:51.802098 kubelet[3702]: E1216 12:16:51.801875 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b28635fdc3bf4ca29f7ba01fb74863fe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwb7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-98b9677f6-82g96_calico-system(738872a1-0466-4442-a71b-b4f7bae6b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:51.804126 containerd[2156]: time="2025-12-16T12:16:51.804100691Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:16:52.057533 containerd[2156]: time="2025-12-16T12:16:52.057264610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:52.061063 containerd[2156]: time="2025-12-16T12:16:52.060969383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:16:52.061183 containerd[2156]: time="2025-12-16T12:16:52.061005928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:52.061405 kubelet[3702]: E1216 12:16:52.061360 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:52.061587 kubelet[3702]: E1216 12:16:52.061419 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:52.061587 kubelet[3702]: E1216 12:16:52.061539 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwb7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-98b9677f6-82g96_calico-system(738872a1-0466-4442-a71b-b4f7bae6b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:52.062686 kubelet[3702]: E1216 12:16:52.062649 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:16:52.484088 containerd[2156]: time="2025-12-16T12:16:52.483707118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:16:52.774772 containerd[2156]: time="2025-12-16T12:16:52.774526077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:52.777747 containerd[2156]: time="2025-12-16T12:16:52.777708638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:52.777952 containerd[2156]: time="2025-12-16T12:16:52.777715574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:52.777985 kubelet[3702]: E1216 12:16:52.777926 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:52.777985 kubelet[3702]: E1216 12:16:52.777970 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:52.778123 kubelet[3702]: E1216 12:16:52.778090 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ltkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74755c97b7-lhmmb_calico-apiserver(94cf7c82-e4b2-4a6c-9fc8-83e906d8394a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:52.779563 kubelet[3702]: E1216 12:16:52.779534 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:16:54.716933 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:42852.service - OpenSSH per-connection server daemon (10.200.16.10:42852). Dec 16 12:16:54.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:42852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:54.720509 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:16:54.720590 kernel: audit: type=1130 audit(1765887414.715:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:42852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:55.138016 sshd[5968]: Accepted publickey for core from 10.200.16.10 port 42852 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:16:55.136000 audit[5968]: USER_ACCT pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.139442 sshd-session[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:55.137000 audit[5968]: CRED_ACQ pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.171080 kernel: audit: type=1101 audit(1765887415.136:834): pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.171222 kernel: audit: type=1103 audit(1765887415.137:835): pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.181505 kernel: audit: type=1006 audit(1765887415.137:836): pid=5968 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 12:16:55.137000 audit[5968]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca84c2d0 a2=3 a3=0 items=0 ppid=1 pid=5968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:55.186553 systemd-logind[2131]: New session 18 of user core. Dec 16 12:16:55.198810 kernel: audit: type=1300 audit(1765887415.137:836): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca84c2d0 a2=3 a3=0 items=0 ppid=1 pid=5968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:55.137000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:55.206044 kernel: audit: type=1327 audit(1765887415.137:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:55.209617 systemd[1]: Started session-18.scope - Session 18 of User core. 
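
Note: the PullImage failures recorded above (containerd fetching ghcr.io/flatcar/calico/whisker, whisker-backend and apiserver at v3.30.4 and receiving 404 Not Found from ghcr.io) can be reproduced outside kubelet with the containerd Go client. The sketch below is illustrative only; the socket path /run/containerd/containerd.sock and the "k8s.io" namespace are assumptions about the node's configuration, not values taken from this log.

    // pullcheck.go - reproduce the "failed to pull and unpack image" error
    // seen in the kubelet/containerd entries above, outside of kubelet.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Assumption: containerd listens on its default socket on this node.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatalf("connect to containerd: %v", err)
        }
        defer client.Close()

        // Assumption: kubelet's CRI images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // One of the references that fails in the log above.
        ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"

        // Pull and unpack, mirroring the CRI PullImage path kubelet uses.
        if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
            fmt.Printf("pull %s failed: %v\n", ref, err) // expected: "... not found"
            return
        }
        fmt.Println("pulled", ref)
    }

Run on the affected node, this would surface the same "failed to resolve image ... not found" message that containerd returns to kubelet, indicating the failure happens during registry resolution rather than inside kubelet.
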
Dec 16 12:16:55.212000 audit[5968]: USER_START pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.233000 audit[5972]: CRED_ACQ pid=5972 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.248672 kernel: audit: type=1105 audit(1765887415.212:837): pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.248793 kernel: audit: type=1103 audit(1765887415.233:838): pid=5972 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.429987 sshd[5972]: Connection closed by 10.200.16.10 port 42852 Dec 16 12:16:55.431652 sshd-session[5968]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:55.432000 audit[5968]: USER_END pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.437203 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:42852.service: Deactivated successfully. Dec 16 12:16:55.439445 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:16:55.458652 systemd-logind[2131]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:16:55.462047 systemd-logind[2131]: Removed session 18. Dec 16 12:16:55.432000 audit[5968]: CRED_DISP pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.477240 kernel: audit: type=1106 audit(1765887415.432:839): pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.477339 kernel: audit: type=1104 audit(1765887415.432:840): pid=5968 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:55.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.37:22-10.200.16.10:42852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:57.482629 containerd[2156]: time="2025-12-16T12:16:57.482329293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:16:57.714418 containerd[2156]: time="2025-12-16T12:16:57.714223256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:57.717501 containerd[2156]: time="2025-12-16T12:16:57.717376381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:16:57.717616 containerd[2156]: time="2025-12-16T12:16:57.717458530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:57.717755 kubelet[3702]: E1216 12:16:57.717709 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:57.718399 kubelet[3702]: E1216 12:16:57.717762 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:57.718399 kubelet[3702]: E1216 12:16:57.717882 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fblkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547b944668-qhgzg_calico-system(7de63821-b623-478a-a40e-6502071e35ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:57.720195 kubelet[3702]: E1216 12:16:57.720155 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:17:00.520300 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:34002.service - OpenSSH per-connection server daemon (10.200.16.10:34002). Dec 16 12:17:00.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.37:22-10.200.16.10:34002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:00.523854 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:00.523920 kernel: audit: type=1130 audit(1765887420.520:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.37:22-10.200.16.10:34002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:00.963000 audit[5985]: USER_ACCT pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.964492 sshd[5985]: Accepted publickey for core from 10.200.16.10 port 34002 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:00.981000 audit[5985]: CRED_ACQ pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.982229 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:00.995708 kernel: audit: type=1101 audit(1765887420.963:843): pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.995810 kernel: audit: type=1103 audit(1765887420.981:844): pid=5985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:00.997521 kernel: audit: type=1006 audit(1765887420.981:845): pid=5985 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 12:17:00.981000 audit[5985]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdafb30c0 a2=3 a3=0 items=0 ppid=1 pid=5985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:01.009732 systemd-logind[2131]: New session 19 of user core. Dec 16 12:17:01.023419 kernel: audit: type=1300 audit(1765887420.981:845): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdafb30c0 a2=3 a3=0 items=0 ppid=1 pid=5985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:00.981000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:01.030492 kernel: audit: type=1327 audit(1765887420.981:845): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:01.031697 systemd[1]: Started session-19.scope - Session 19 of User core. 
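
Note: the 404 behind the kube-controllers pull failure above can also be checked directly against the registry, independent of containerd, via the OCI distribution API at /v2/<name>/manifests/<tag>. The sketch below assumes ghcr.io issues anonymous pull tokens for public repositories at https://ghcr.io/token?scope=repository:<name>:pull; that token endpoint is an assumption, not something recorded in this log.

    // tagcheck.go - ask ghcr.io whether the tag exists, mirroring the
    // "fetch failed after status: 404 Not Found" entries above.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // Repository and tag taken from the failing reference in the log above.
        repo := "flatcar/calico/kube-controllers"
        tag := "v3.30.4"

        // Assumption: anonymous pull tokens for public repos are issued here.
        tokenURL := fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo)
        resp, err := http.Get(tokenURL)
        if err != nil {
            log.Fatalf("token request: %v", err)
        }
        defer resp.Body.Close()

        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            log.Fatalf("decode token: %v", err)
        }

        // HEAD the manifest: 200 means the tag resolves, 404 matches the log.
        req, err := http.NewRequest(http.MethodHead,
            fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
        if err != nil {
            log.Fatalf("build request: %v", err)
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept",
            "application/vnd.oci.image.index.v1+json, "+
                "application/vnd.docker.distribution.manifest.list.v2+json, "+
                "application/vnd.oci.image.manifest.v1+json, "+
                "application/vnd.docker.distribution.manifest.v2+json")

        res, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatalf("manifest request: %v", err)
        }
        res.Body.Close()
        fmt.Printf("ghcr.io/%s:%s -> HTTP %s\n", repo, tag, res.Status)
    }

A 200 response would mean the tag resolves for an anonymous pull; a 404 would be consistent with the "not found" resolution errors containerd reports above.
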
Dec 16 12:17:01.034000 audit[5985]: USER_START pid=5985 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.052000 audit[5989]: CRED_ACQ pid=5989 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.067621 kernel: audit: type=1105 audit(1765887421.034:846): pid=5985 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.067738 kernel: audit: type=1103 audit(1765887421.052:847): pid=5989 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.264419 sshd[5989]: Connection closed by 10.200.16.10 port 34002 Dec 16 12:17:01.265418 sshd-session[5985]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:01.267000 audit[5985]: USER_END pid=5985 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.271024 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:34002.service: Deactivated successfully. Dec 16 12:17:01.274363 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:17:01.287197 systemd-logind[2131]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:17:01.267000 audit[5985]: CRED_DISP pid=5985 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.288585 systemd-logind[2131]: Removed session 19. Dec 16 12:17:01.300916 kernel: audit: type=1106 audit(1765887421.267:848): pid=5985 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.300997 kernel: audit: type=1104 audit(1765887421.267:849): pid=5985 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.37:22-10.200.16.10:34002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:01.346580 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:34010.service - OpenSSH per-connection server daemon (10.200.16.10:34010). Dec 16 12:17:01.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.37:22-10.200.16.10:34010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:01.484480 containerd[2156]: time="2025-12-16T12:17:01.483093369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:17:01.733000 audit[6001]: USER_ACCT pid=6001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.734491 sshd[6001]: Accepted publickey for core from 10.200.16.10 port 34010 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:01.736084 sshd-session[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:01.734000 audit[6001]: CRED_ACQ pid=6001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.735000 audit[6001]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc5001c0 a2=3 a3=0 items=0 ppid=1 pid=6001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:01.735000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:01.740289 systemd-logind[2131]: New session 20 of user core. Dec 16 12:17:01.746735 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 12:17:01.750000 audit[6001]: USER_START pid=6001 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.752000 audit[6006]: CRED_ACQ pid=6006 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:01.827695 containerd[2156]: time="2025-12-16T12:17:01.827620546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:01.832131 containerd[2156]: time="2025-12-16T12:17:01.832089688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:17:01.832214 containerd[2156]: time="2025-12-16T12:17:01.832180829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:01.832890 kubelet[3702]: E1216 12:17:01.832850 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:01.833412 kubelet[3702]: E1216 12:17:01.832904 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:01.833412 kubelet[3702]: E1216 12:17:01.833110 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jl6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74755c97b7-c55sp_calico-apiserver(e0f8af17-5d0e-41d3-8143-e682bcff58c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:01.833988 containerd[2156]: time="2025-12-16T12:17:01.833701713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:17:01.834793 kubelet[3702]: E1216 12:17:01.834763 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:17:02.103124 containerd[2156]: time="2025-12-16T12:17:02.103053113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:02.108185 containerd[2156]: time="2025-12-16T12:17:02.108109551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:17:02.108380 containerd[2156]: time="2025-12-16T12:17:02.108341004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:02.108756 kubelet[3702]: E1216 12:17:02.108709 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:17:02.108756 kubelet[3702]: E1216 12:17:02.108759 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:17:02.109028 kubelet[3702]: E1216 12:17:02.108874 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgdkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7x78m_calico-system(c365101f-0c2a-4266-abb7-2136287ff3ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:02.110334 kubelet[3702]: E1216 12:17:02.110016 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:17:02.165214 sshd[6006]: Connection closed by 10.200.16.10 port 34010 Dec 16 12:17:02.163982 sshd-session[6001]: pam_unix(sshd:session): session 
closed for user core Dec 16 12:17:02.165000 audit[6001]: USER_END pid=6001 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:02.165000 audit[6001]: CRED_DISP pid=6001 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:02.169849 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:34010.service: Deactivated successfully. Dec 16 12:17:02.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.37:22-10.200.16.10:34010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:02.172443 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:17:02.175433 systemd-logind[2131]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:17:02.176372 systemd-logind[2131]: Removed session 20. Dec 16 12:17:02.246694 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:34016.service - OpenSSH per-connection server daemon (10.200.16.10:34016). Dec 16 12:17:02.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.37:22-10.200.16.10:34016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:02.640000 audit[6016]: USER_ACCT pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:02.641761 sshd[6016]: Accepted publickey for core from 10.200.16.10 port 34016 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:02.643000 audit[6016]: CRED_ACQ pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:02.644000 audit[6016]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff15710d0 a2=3 a3=0 items=0 ppid=1 pid=6016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:02.644000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:02.645198 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:02.649452 systemd-logind[2131]: New session 21 of user core. Dec 16 12:17:02.656807 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 12:17:02.658000 audit[6016]: USER_START pid=6016 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:02.660000 audit[6020]: CRED_ACQ pid=6020 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:03.374000 audit[6037]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=6037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:03.374000 audit[6037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdfb47680 a2=0 a3=1 items=0 ppid=3860 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:03.374000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:03.382000 audit[6037]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:03.382000 audit[6037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdfb47680 a2=0 a3=1 items=0 ppid=3860 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:03.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:03.410000 audit[6039]: NETFILTER_CFG table=filter:151 family=2 entries=38 op=nft_register_rule pid=6039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:03.410000 audit[6039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffe69e530 a2=0 a3=1 items=0 ppid=3860 pid=6039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:03.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:03.415000 audit[6039]: NETFILTER_CFG table=nat:152 family=2 entries=20 op=nft_register_rule pid=6039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:03.415000 audit[6039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffe69e530 a2=0 a3=1 items=0 ppid=3860 pid=6039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:03.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:03.440855 sshd[6020]: Connection closed by 10.200.16.10 port 34016 Dec 16 12:17:03.441630 sshd-session[6016]: pam_unix(sshd:session): session closed for user core Dec 16 
12:17:03.444000 audit[6016]: USER_END pid=6016 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:03.444000 audit[6016]: CRED_DISP pid=6016 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:03.448927 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:34016.service: Deactivated successfully. Dec 16 12:17:03.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.37:22-10.200.16.10:34016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:03.452067 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:17:03.454370 systemd-logind[2131]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:17:03.455415 systemd-logind[2131]: Removed session 21. Dec 16 12:17:03.483261 containerd[2156]: time="2025-12-16T12:17:03.483216329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:17:03.533724 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:34020.service - OpenSSH per-connection server daemon (10.200.16.10:34020). Dec 16 12:17:03.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.37:22-10.200.16.10:34020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:03.723701 containerd[2156]: time="2025-12-16T12:17:03.722754474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:03.726258 containerd[2156]: time="2025-12-16T12:17:03.726211750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:17:03.726358 containerd[2156]: time="2025-12-16T12:17:03.726301939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:03.726567 kubelet[3702]: E1216 12:17:03.726456 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:17:03.726567 kubelet[3702]: E1216 12:17:03.726546 3702 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:17:03.727209 kubelet[3702]: E1216 12:17:03.727025 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:03.730423 containerd[2156]: time="2025-12-16T12:17:03.730381887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:17:03.939000 audit[6044]: USER_ACCT pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:03.942034 sshd[6044]: Accepted publickey for core from 10.200.16.10 port 34020 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:03.942000 audit[6044]: CRED_ACQ pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:03.942000 audit[6044]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc95b62d0 a2=3 a3=0 items=0 ppid=1 pid=6044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:03.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:03.945142 sshd-session[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:03.950599 systemd-logind[2131]: New session 22 of user core. Dec 16 12:17:03.956766 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:17:03.959000 audit[6044]: USER_START pid=6044 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:03.960000 audit[6048]: CRED_ACQ pid=6048 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:03.994490 containerd[2156]: time="2025-12-16T12:17:03.994356432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:03.998215 containerd[2156]: time="2025-12-16T12:17:03.998167162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:17:03.998310 containerd[2156]: time="2025-12-16T12:17:03.998254151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:03.998681 kubelet[3702]: E1216 12:17:03.998449 3702 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:17:03.998681 kubelet[3702]: E1216 12:17:03.998516 3702 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:17:03.998681 kubelet[3702]: E1216 12:17:03.998635 3702 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lrs86_calico-system(77a7712d-2394-4a4f-8873-2dd27305d176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:03.999936 kubelet[3702]: E1216 12:17:03.999904 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:17:04.293580 sshd[6048]: Connection closed by 10.200.16.10 port 34020 Dec 16 12:17:04.294091 sshd-session[6044]: 
pam_unix(sshd:session): session closed for user core Dec 16 12:17:04.296000 audit[6044]: USER_END pid=6044 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:04.296000 audit[6044]: CRED_DISP pid=6044 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:04.302270 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:34020.service: Deactivated successfully. Dec 16 12:17:04.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.37:22-10.200.16.10:34020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:04.305050 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:17:04.307175 systemd-logind[2131]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:17:04.310180 systemd-logind[2131]: Removed session 22. Dec 16 12:17:04.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.37:22-10.200.16.10:34026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:04.376856 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:34026.service - OpenSSH per-connection server daemon (10.200.16.10:34026). Dec 16 12:17:04.487127 kubelet[3702]: E1216 12:17:04.486943 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:17:04.770000 audit[6058]: USER_ACCT pid=6058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:04.771445 sshd[6058]: Accepted publickey for core from 10.200.16.10 port 34026 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:04.773000 audit[6058]: CRED_ACQ pid=6058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:04.773000 audit[6058]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc19083d0 
a2=3 a3=0 items=0 ppid=1 pid=6058 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:04.773000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:04.774369 sshd-session[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:04.780815 systemd-logind[2131]: New session 23 of user core. Dec 16 12:17:04.785643 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:17:04.789000 audit[6058]: USER_START pid=6058 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:04.792000 audit[6062]: CRED_ACQ pid=6062 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:05.040446 sshd[6062]: Connection closed by 10.200.16.10 port 34026 Dec 16 12:17:05.041127 sshd-session[6058]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:05.041000 audit[6058]: USER_END pid=6058 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:05.042000 audit[6058]: CRED_DISP pid=6058 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:05.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.37:22-10.200.16.10:34026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:05.045834 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:34026.service: Deactivated successfully. Dec 16 12:17:05.048326 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:17:05.052094 systemd-logind[2131]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:17:05.053247 systemd-logind[2131]: Removed session 23. 
Dec 16 12:17:05.484325 kubelet[3702]: E1216 12:17:05.484272 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:17:08.574000 audit[6074]: NETFILTER_CFG table=filter:153 family=2 entries=26 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:08.579959 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:17:08.580047 kernel: audit: type=1325 audit(1765887428.574:891): table=filter:153 family=2 entries=26 op=nft_register_rule pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:08.574000 audit[6074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe4f258e0 a2=0 a3=1 items=0 ppid=3860 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:08.609797 kernel: audit: type=1300 audit(1765887428.574:891): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe4f258e0 a2=0 a3=1 items=0 ppid=3860 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:08.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:08.619580 kernel: audit: type=1327 audit(1765887428.574:891): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:08.589000 audit[6074]: NETFILTER_CFG table=nat:154 family=2 entries=104 op=nft_register_chain pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:08.630122 kernel: audit: type=1325 audit(1765887428.589:892): table=nat:154 family=2 entries=104 op=nft_register_chain pid=6074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:08.589000 audit[6074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe4f258e0 a2=0 a3=1 items=0 ppid=3860 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:08.649274 kernel: audit: type=1300 audit(1765887428.589:892): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe4f258e0 a2=0 a3=1 items=0 ppid=3860 pid=6074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:08.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:08.659705 kernel: audit: type=1327 audit(1765887428.589:892): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:09.484039 kubelet[3702]: E1216 12:17:09.483086 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:17:10.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:10.133713 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:47288.service - OpenSSH per-connection server daemon (10.200.16.10:47288). Dec 16 12:17:10.153506 kernel: audit: type=1130 audit(1765887430.132:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:10.563000 audit[6076]: USER_ACCT pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:10.565507 sshd[6076]: Accepted publickey for core from 10.200.16.10 port 47288 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:10.580732 sshd-session[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:10.578000 audit[6076]: CRED_ACQ pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:10.595268 kernel: audit: type=1101 audit(1765887430.563:894): pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:10.595331 kernel: audit: type=1103 audit(1765887430.578:895): pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:10.605482 kernel: audit: type=1006 audit(1765887430.578:896): pid=6076 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 12:17:10.578000 audit[6076]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0ebc350 a2=3 a3=0 items=0 ppid=1 pid=6076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:10.578000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:10.609987 systemd-logind[2131]: New session 24 of user core. Dec 16 12:17:10.613645 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:17:10.614000 audit[6076]: USER_START pid=6076 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:10.616000 audit[6080]: CRED_ACQ pid=6080 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:10.857395 sshd[6080]: Connection closed by 10.200.16.10 port 47288 Dec 16 12:17:10.857293 sshd-session[6076]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:10.856000 audit[6076]: USER_END pid=6076 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:10.857000 audit[6076]: CRED_DISP pid=6076 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:10.861347 systemd-logind[2131]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:17:10.861546 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:47288.service: Deactivated successfully. Dec 16 12:17:10.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.37:22-10.200.16.10:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:10.864321 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:17:10.866728 systemd-logind[2131]: Removed session 24. 
Dec 16 12:17:12.483110 kubelet[3702]: E1216 12:17:12.483058 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:17:12.484034 kubelet[3702]: E1216 12:17:12.483997 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:17:15.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:47304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:15.945702 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:47304.service - OpenSSH per-connection server daemon (10.200.16.10:47304). Dec 16 12:17:15.948846 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:17:15.948911 kernel: audit: type=1130 audit(1765887435.944:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:47304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:16.355000 audit[6094]: USER_ACCT pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.373575 sshd[6094]: Accepted publickey for core from 10.200.16.10 port 47304 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:16.375236 sshd-session[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:16.379844 systemd-logind[2131]: New session 25 of user core. 
Dec 16 12:17:16.372000 audit[6094]: CRED_ACQ pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.395812 kernel: audit: type=1101 audit(1765887436.355:903): pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.395894 kernel: audit: type=1103 audit(1765887436.372:904): pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.398695 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 12:17:16.406805 kernel: audit: type=1006 audit(1765887436.372:905): pid=6094 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 12:17:16.372000 audit[6094]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe447ded0 a2=3 a3=0 items=0 ppid=1 pid=6094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:16.427468 kernel: audit: type=1300 audit(1765887436.372:905): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe447ded0 a2=3 a3=0 items=0 ppid=1 pid=6094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:16.372000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:16.435717 kernel: audit: type=1327 audit(1765887436.372:905): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:16.408000 audit[6094]: USER_START pid=6094 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.455510 kernel: audit: type=1105 audit(1765887436.408:906): pid=6094 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.427000 audit[6098]: CRED_ACQ pid=6098 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.470990 kernel: audit: type=1103 audit(1765887436.427:907): pid=6098 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.484920 
kubelet[3702]: E1216 12:17:16.484872 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:17:16.633519 sshd[6098]: Connection closed by 10.200.16.10 port 47304 Dec 16 12:17:16.634026 sshd-session[6094]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:16.633000 audit[6094]: USER_END pid=6094 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.638894 systemd-logind[2131]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:17:16.639942 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:47304.service: Deactivated successfully. Dec 16 12:17:16.643347 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:17:16.647542 systemd-logind[2131]: Removed session 25. Dec 16 12:17:16.634000 audit[6094]: CRED_DISP pid=6094 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.671137 kernel: audit: type=1106 audit(1765887436.633:908): pid=6094 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.671244 kernel: audit: type=1104 audit(1765887436.634:909): pid=6094 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:16.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.37:22-10.200.16.10:47304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:18.485273 kubelet[3702]: E1216 12:17:18.485020 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:17:19.483516 kubelet[3702]: E1216 12:17:19.483464 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:17:21.721987 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:48890.service - OpenSSH per-connection server daemon (10.200.16.10:48890). Dec 16 12:17:21.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:48890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:21.759724 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:21.759872 kernel: audit: type=1130 audit(1765887441.721:911): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:48890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:22.203000 audit[6133]: USER_ACCT pid=6133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.223330 sshd-session[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:22.223983 sshd[6133]: Accepted publickey for core from 10.200.16.10 port 48890 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:22.222000 audit[6133]: CRED_ACQ pid=6133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.240870 kernel: audit: type=1101 audit(1765887442.203:912): pid=6133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.240986 kernel: audit: type=1103 audit(1765887442.222:913): pid=6133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.251018 kernel: audit: type=1006 audit(1765887442.222:914): pid=6133 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 12:17:22.247614 systemd-logind[2131]: New session 26 of user core. Dec 16 12:17:22.222000 audit[6133]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff84a5750 a2=3 a3=0 items=0 ppid=1 pid=6133 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.222000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:22.276333 kernel: audit: type=1300 audit(1765887442.222:914): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff84a5750 a2=3 a3=0 items=0 ppid=1 pid=6133 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.276436 kernel: audit: type=1327 audit(1765887442.222:914): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:22.275742 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 12:17:22.278000 audit[6133]: USER_START pid=6133 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.278000 audit[6137]: CRED_ACQ pid=6137 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.314397 kernel: audit: type=1105 audit(1765887442.278:915): pid=6133 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.314584 kernel: audit: type=1103 audit(1765887442.278:916): pid=6137 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.520952 sshd[6137]: Connection closed by 10.200.16.10 port 48890 Dec 16 12:17:22.523429 sshd-session[6133]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:22.524000 audit[6133]: USER_END pid=6133 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.528296 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:48890.service: Deactivated successfully. Dec 16 12:17:22.529521 systemd-logind[2131]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:17:22.531447 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:17:22.533586 systemd-logind[2131]: Removed session 26. Dec 16 12:17:22.524000 audit[6133]: CRED_DISP pid=6133 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.560128 kernel: audit: type=1106 audit(1765887442.524:917): pid=6133 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.560250 kernel: audit: type=1104 audit(1765887442.524:918): pid=6133 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:22.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.37:22-10.200.16.10:48890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:24.482893 kubelet[3702]: E1216 12:17:24.482851 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547b944668-qhgzg" podUID="7de63821-b623-478a-a40e-6502071e35ea" Dec 16 12:17:25.483566 kubelet[3702]: E1216 12:17:25.483515 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4" Dec 16 12:17:25.483962 kubelet[3702]: E1216 12:17:25.483868 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7x78m" podUID="c365101f-0c2a-4266-abb7-2136287ff3ab" Dec 16 12:17:27.484008 kubelet[3702]: E1216 12:17:27.483642 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-lhmmb" podUID="94cf7c82-e4b2-4a6c-9fc8-83e906d8394a" Dec 16 12:17:27.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:48898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:27.607120 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:48898.service - OpenSSH per-connection server daemon (10.200.16.10:48898). Dec 16 12:17:27.610345 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:27.610443 kernel: audit: type=1130 audit(1765887447.605:920): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:48898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:28.012000 audit[6150]: USER_ACCT pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.031607 sshd[6150]: Accepted publickey for core from 10.200.16.10 port 48898 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:28.032972 sshd-session[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:28.030000 audit[6150]: CRED_ACQ pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.049604 kernel: audit: type=1101 audit(1765887448.012:921): pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.049686 kernel: audit: type=1103 audit(1765887448.030:922): pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.062929 kernel: audit: type=1006 audit(1765887448.030:923): pid=6150 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 12:17:28.030000 audit[6150]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4d89630 a2=3 a3=0 items=0 ppid=1 pid=6150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:28.068503 systemd-logind[2131]: New session 27 of user core. Dec 16 12:17:28.084669 kernel: audit: type=1300 audit(1765887448.030:923): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4d89630 a2=3 a3=0 items=0 ppid=1 pid=6150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:28.030000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:28.093008 kernel: audit: type=1327 audit(1765887448.030:923): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:28.094763 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 12:17:28.097000 audit[6150]: USER_START pid=6150 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.101000 audit[6154]: CRED_ACQ pid=6154 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.138166 kernel: audit: type=1105 audit(1765887448.097:924): pid=6150 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.138306 kernel: audit: type=1103 audit(1765887448.101:925): pid=6154 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.297598 sshd[6154]: Connection closed by 10.200.16.10 port 48898 Dec 16 12:17:28.297705 sshd-session[6150]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:28.297000 audit[6150]: USER_END pid=6150 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.303208 systemd-logind[2131]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:17:28.304594 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:48898.service: Deactivated successfully. Dec 16 12:17:28.307798 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:17:28.310233 systemd-logind[2131]: Removed session 27. Dec 16 12:17:28.297000 audit[6150]: CRED_DISP pid=6150 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.334883 kernel: audit: type=1106 audit(1765887448.297:926): pid=6150 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.334997 kernel: audit: type=1104 audit(1765887448.297:927): pid=6150 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:28.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.37:22-10.200.16.10:48898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:29.484313 kubelet[3702]: E1216 12:17:29.484218 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lrs86" podUID="77a7712d-2394-4a4f-8873-2dd27305d176" Dec 16 12:17:33.386691 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:37924.service - OpenSSH per-connection server daemon (10.200.16.10:37924). Dec 16 12:17:33.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.37:22-10.200.16.10:37924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:33.389989 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:17:33.390050 kernel: audit: type=1130 audit(1765887453.385:929): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.37:22-10.200.16.10:37924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:33.484348 kubelet[3702]: E1216 12:17:33.484307 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-98b9677f6-82g96" podUID="738872a1-0466-4442-a71b-b4f7bae6b427" Dec 16 12:17:33.833082 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 37924 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:17:33.831000 audit[6165]: USER_ACCT pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.834949 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:33.831000 audit[6165]: CRED_ACQ pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 
terminal=ssh res=success' Dec 16 12:17:33.865856 kernel: audit: type=1101 audit(1765887453.831:930): pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.865978 kernel: audit: type=1103 audit(1765887453.831:931): pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.876022 kernel: audit: type=1006 audit(1765887453.831:932): pid=6165 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Dec 16 12:17:33.831000 audit[6165]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1a17f70 a2=3 a3=0 items=0 ppid=1 pid=6165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:33.893707 kernel: audit: type=1300 audit(1765887453.831:932): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1a17f70 a2=3 a3=0 items=0 ppid=1 pid=6165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:33.831000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:33.900703 kernel: audit: type=1327 audit(1765887453.831:932): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:17:33.905366 systemd-logind[2131]: New session 28 of user core. Dec 16 12:17:33.908636 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 16 12:17:33.910000 audit[6165]: USER_START pid=6165 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.929000 audit[6169]: CRED_ACQ pid=6169 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.945689 kernel: audit: type=1105 audit(1765887453.910:933): pid=6165 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:33.945785 kernel: audit: type=1103 audit(1765887453.929:934): pid=6169 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.151796 sshd[6169]: Connection closed by 10.200.16.10 port 37924 Dec 16 12:17:34.153664 sshd-session[6165]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:34.154000 audit[6165]: USER_END pid=6165 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.159580 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:37924.service: Deactivated successfully. Dec 16 12:17:34.163763 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 12:17:34.177392 systemd-logind[2131]: Session 28 logged out. Waiting for processes to exit. Dec 16 12:17:34.154000 audit[6165]: CRED_DISP pid=6165 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.192753 kernel: audit: type=1106 audit(1765887454.154:935): pid=6165 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.192850 kernel: audit: type=1104 audit(1765887454.154:936): pid=6165 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:17:34.193299 systemd-logind[2131]: Removed session 28. Dec 16 12:17:34.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.37:22-10.200.16.10:37924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:36.483729 kubelet[3702]: E1216 12:17:36.483575 3702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74755c97b7-c55sp" podUID="e0f8af17-5d0e-41d3-8143-e682bcff58c4"